Intel® oneAPI HPC Toolkit
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

MPI: low upper bound for tag

LionelL
Beginner

Hello community members,

Using the oneAPI HPC Toolkit with Intel MPI 2021.1.1, I have compiled on my workstation a large code that runs nicely on supercomputers with the Intel compiler/MPI 2017 environment.

When I run it on my workstation, I get a message saying that the tag for MPI_SEND (or MPI_RECV) is invalid. It appears that the internal upper bound for the tag is very low, equal to 2**19-1, whereas the value advertised by MPI_TAG_UB is much higher (1681915906).

The problem can be reproduced by running the test program listed below, whose output reads:

> 1234 sent from 0 with tag 524286 < 1681915906
> 1234 received by 1 with tag 524286 < 1681915906
> 1234 sent from 0 with tag 524287 < 1681915906
> 1234 received by 1 with tag 524287 < 1681915906
> Abort(738838276) on node 1 (rank 1 in comm 0): Fatal error in PMPI_Recv: Invalid tag, error stack:
> PMPI_Recv(173): MPI_Recv(buf=0x4b7408, count=1, dtype=0x4c000430, src=0, tag=524288, MPI_COMM_WORLD, status=0x4ba280) failed
> PMPI_Recv(105): Invalid tag, value is 524288
> Abort(537516036) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Send: Invalid tag, error stack:
> PMPI_Send(159): MPI_Send(buf=0x4b7408, count=1, dtype=0x4c000430, dest=1, tag=524288, MPI_COMM_WORLD) failed
> PMPI_Send(97).: Invalid tag, value is 524288

Any suggestion to overcome this difficulty is welcome, since changing all the tags throughout the code would be cumbersome.

Best regards,

Lionel

program test_MPI
  implicit none
  include 'mpif.h'
  integer*4 void
  integer*4 sizempi,rankmpi,tagmpi,ierrmpi
  integer*4 statusmpi(MPI_STATUS_SIZE)

  call MPI_INIT(ierrmpi)
  call MPI_COMM_SIZE(MPI_COMM_WORLD,sizempi,ierrmpi)
  call MPI_COMM_RANK(MPI_COMM_WORLD,rankmpi,ierrmpi)
  void=1234
  if(sizempi.eq.2) then
    ! Sweep tags around 2**19-1 to expose the internal limit
    do tagmpi=2**19-2,2**19
      if(rankmpi.eq.0) then
        call MPI_SEND(void,1_4,MPI_INTEGER4,1_4,tagmpi,MPI_COMM_WORLD,ierrmpi)
        write(*,'(i0,a,i0,a,i0)') void,' sent from 0 with tag ',tagmpi,' < ',MPI_TAG_UB
      else
        call MPI_RECV(void,1_4,MPI_INTEGER4,0_4,tagmpi,MPI_COMM_WORLD,statusmpi,ierrmpi)
        write(*,'(i0,a,i0,a,i0)') void,' received by 1 with tag ',tagmpi,' < ',MPI_TAG_UB
      endif
      call MPI_BARRIER(MPI_COMM_WORLD,ierrmpi)
    enddo
  endif
  call MPI_FINALIZE(ierrmpi)
end program
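One observation that may be relevant (an assumption on my part, not confirmed in the thread): in the MPI standard, MPI_TAG_UB is a predefined attribute *key*, not the bound itself, so printing the mpif.h constant directly shows the key's handle value rather than the actual tag limit. The real bound can be queried at runtime with MPI_COMM_GET_ATTR on MPI_COMM_WORLD. A minimal sketch:

```fortran
program query_tag_ub
  implicit none
  include 'mpif.h'
  integer :: ierr
  ! The attribute value must be an address-sized integer per the standard
  integer(kind=MPI_ADDRESS_KIND) :: tag_ub_val
  logical :: flag

  call MPI_INIT(ierr)
  ! MPI_TAG_UB is an attribute key; the actual upper bound is the
  ! attribute value attached to MPI_COMM_WORLD.
  call MPI_COMM_GET_ATTR(MPI_COMM_WORLD, MPI_TAG_UB, tag_ub_val, flag, ierr)
  if (flag) then
    write(*,'(a,i0)') 'Actual tag upper bound: ', tag_ub_val
  end if
  call MPI_FINALIZE(ierr)
end program
```

If this query reports a bound near 2**19-1, the library really does impose that limit and the tags in the code would have to fit under it.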

LionelL
Beginner

I forgot to add information about the OS/CPU. Here it is.

Ubuntu 20.04.1 LTS with gcc 9.3.0

2*Xeon Gold 6240

Steve_Lionel
Black Belt Retired Employee