Hi there,
In Intel MPI 5.0.3, MPI_TAG_UB is reported as 1681915906, but internally tags are limited to values below 2^29 = 536870912, as demonstrated by the attached code.
The same code runs just fine with Intel MPI 4.0.3.
Just letting you know about the problem.
Hope to see a fix soon. Thanks.
Xudong
Encl:
2-1. Source Code
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
program main
    implicit none
    include "mpif.h"

    real    :: ibuf(10000)
    integer :: tag = 2**29 - 1    ! largest tag the 5.0.3 library still accepts
    integer :: ierr
    integer :: req(2)             ! two request handles

    call MPI_INIT(ierr)

    ! MPI_TAG_UB from mpif.h is the attribute key constant (prints 1681915906)
    write(*,*) "Tag_UB=", MPI_TAG_UB, tag
    call MPI_IRECV(ibuf, 1000, MPI_REAL, 0, tag, MPI_COMM_WORLD, req(1), ierr)   ! tag = 2**29 - 1: accepted
    write(*,*) "Pass ..."

    tag = tag + 1
    write(*,*) "Tag_UB=", MPI_TAG_UB, tag
    call MPI_IRECV(ibuf, 1000, MPI_REAL, 0, tag, MPI_COMM_WORLD, req(2), ierr)   ! tag = 2**29: "Invalid tag"

    call MPI_FINALIZE(ierr)       ! not reached in the failing versions
end program main
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
2-2. The screen output is as follows:
Tag_UB= 1681915906 536870911
Pass ...
Tag_UB= 1681915906 536870912
Fatal error in MPI_Irecv: Invalid tag, error stack:
MPI_Irecv(165): MPI_Irecv(buf=0x6a4fc0, count=1000, MPI_REAL, src=0, tag=536870912, MPI_COMM_WORLD, request=0x6aec00) failed
MPI_Irecv(109): Invalid tag, value is 536870912
Hi Xudong,
Thanks for reporting this to us! I've submitted an internal bug report to our development team. I'll update you once a fix is available.
Best regards,
~Gergana
A quick update here. This issue has been fixed and will be included in our upcoming Intel MPI 5.1 version (to be released later this summer). We can provide you with an early engineering build for testing. You're welcome to send me a direct message with that request.
Regards,
~Gergana
Dear Gergana,
I get the same error with the Intel MPI Library 2017 Update 1:
$ mpiifort -v
mpiifort for the Intel(R) MPI Library 2017 Update 1 for Linux*
Copyright(C) 2003-2016, Intel Corporation. All rights reserved.
ifort version 17.0.1
$ mpiifort test.F90; mpirun -np 2 ./a.out
Tag_UB= 1681915906 2097152
Fatal error in MPI_Irecv: Invalid tag, error stack:
MPI_Irecv(170): MPI_Irecv(buf=0x6b9060, count=1000, MPI_REAL, src=0, tag=2097152, MPI_COMM_WORLD, request=0x6c2ca0) failed
MPI_Irecv(109): Invalid tag, value is 2097152
Happy New Year 2017!
Pierre
Same issue on send.
On my local cluster, one Xeon node and one KNL node, both running Intel PS 17.0.1, there is no problem.
Recently (this month), running on the Colfax Cluster with Intel PS 17.0.2, where I build on my system and scp the program to the cluster, I receive the same invalid tag error.
FWIW, I am sending/receiving a zero-length message with the tag value used as a dispatch index. The code uses the MPI_TAG_UB value as a "nothing left to do" indicator. My hack of a resolution is to
#define MPI_TAG_UB_broken 999999
Then use that. I'd rather use a predefined upper bound, and I accept the fact that different versions of MPI may have different values for the upper-bound tag. Therefore, I would like to suggest that there be a function that can be called to obtain the limits as used by the specific MPI library used by the application at run time.
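For context, a minimal sketch of the workaround described above, assuming a preprocessed .F90 source and the mpif.h interface; worker_loop and the zero-length-message protocol details are made up for illustration and are not Jim's actual code:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Hypothetical sketch: hard-code a "safe" upper-bound tag because the
! library's effective limit is lower than the advertised one.
#define MPI_TAG_UB_broken 999999

subroutine worker_loop()
    implicit none
    include "mpif.h"
    integer :: status(MPI_STATUS_SIZE), ierr
    real    :: dummy(1)
    do
        ! Zero-length message: only the tag carries information.
        call MPI_RECV(dummy, 0, MPI_REAL, 0, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
        if (status(MPI_TAG) == MPI_TAG_UB_broken) exit       ! "nothing left to do"
        write(*,*) "dispatching work item", status(MPI_TAG)  ! stand-in for real dispatch
    end do
end subroutine worker_loop
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!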
Jim Dempsey
The bug is coming back with the 2019 beta version:
$ mpiifort -v
mpiifort for the Intel(R) MPI Library 2019 Beta for Linux*
Copyright 2003-2018, Intel Corporation.
ifort version 19.0.0.046 Beta
$ mpirun -np 2 ./a.out
Tag_UB= 1681915906 8454145
Abort(67744004) on node 1 (rank 1 in comm 0): Fatal error in MPI_Irecv: Invalid tag, error stack:
MPI_Irecv(156): MPI_Irecv(buf=0x6bb0a0, count=1000, MPI_REAL, src=0, tag=8454145, MPI_COMM_WORLD, request=0x6c4ce0) failed
MPI_Irecv(100): Invalid tag, value is 8454145
Tag_UB= 1681915906 8454145
[cli_1]: readline failed
Thank you very much for giving me the opportunity to test this new version.
Hello,
Regarding this:
I would like to suggest that there be a function that can be called to obtain the limits as used by the specific MPI library used by the application at run time.
MPI has such a function; here is an example:
int tag_ub = 32767;      /* 32767 is the minimum MPI_TAG_UB the standard guarantees */
int flag;
int *tag_ub_ptr;
MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub_ptr, &flag);
if (flag)
    tag_ub = *tag_ub_ptr;
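For the Fortran programs in this thread, a rough equivalent is the sketch below, assuming the mpif.h interface; in the Fortran binding of MPI_COMM_GET_ATTR the attribute value comes back as an INTEGER(KIND=MPI_ADDRESS_KIND):
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
program query_tag_ub
    implicit none
    include "mpif.h"
    integer(kind=MPI_ADDRESS_KIND) :: tag_ub_val
    logical :: flag
    integer :: ierr
    call MPI_INIT(ierr)
    ! MPI_TAG_UB is only the attribute key; the attached attribute value
    ! is the largest tag the library actually accepts.
    call MPI_COMM_GET_ATTR(MPI_COMM_WORLD, MPI_TAG_UB, tag_ub_val, flag, ierr)
    if (flag) write(*,*) "Largest usable tag =", tag_ub_val
    call MPI_FINALIZE(ierr)
end program query_tag_ub
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!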
This bug still seems not to be fixed? I tried the 2020 and 2019 versions.
@jimdempseyatthecove your solution works, thanks. Although I am not sure why it works.
