Intel® oneAPI HPC Toolkit
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.
2019 Discussions

[UPDATED] : Maximum MPI Buffer Dimension



Is there a maximum MPI buffer size? I have a buffer-dimension problem in my MPI code when trying to MPI_Pack large arrays. The offending instruction is the first pack call:


where the double-precision array R has LVB = 6331625 elements, BUF = 354571000, and LBUF = BUF*8 = 2836568000 (since I have to send six other arrays with the same dimension as R).
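The negative outcount in the error output is exactly what LBUF = 2836568000 becomes when truncated to a 32-bit signed integer. A quick check of that wraparound (Python used here purely for illustration; the actual code is Fortran):

```python
# Illustrative only: reproduce the 32-bit signed wraparound of LBUF.
import ctypes

LBUF = 354571000 * 8                  # 2836568000, larger than 2**31 - 1
wrapped = ctypes.c_int32(LBUF).value  # interpret the low 32 bits as signed
print(wrapped)                        # -1458399296, the "Negative count" value
print(LBUF > 2**31 - 1)               # True
```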

The error output is the following:

Fatal error in PMPI_Pack: Invalid count, error stack:
PMPI_Pack(272): MPI_Pack(inbuf=0x2b4384000010, incount=6331625, MPI_DOUBLE_PRECISION, outbuf=0x2b51e593d010, outcount=-1458399296, position=0x7fffe24fbaa8, MPI_COMM_WORLD) failed
PMPI_Pack(190): Negative count, value is -1458399296

The code is Fortran 2008 using Intel MPI, running on a cluster with an InfiniBand interconnect between nodes; here are the versions:

ifort (IFORT) 15.0.0 20140723

Intel(R) MPI Library for Linux* OS, Version 5.0 Update 1 Build 20140709

So, how can I solve the problem? Is there some environment variable to set in order to increase the buffer size limit? I could break up the MPI_Pack into multiple MPI_Send calls (this routine is executed once at startup, so performance is not an issue), but before doing that, I would like to be sure about the cause.

Thank you in advance.



I found the issue: the count argument in the Intel MPI implementation is simply a 4-byte integer, so the maximum allowed value is 2^31-1 = 2147483647. It is therefore not possible to pack or send a larger object in a single call. For now, I have solved it by sending the data in multiple calls instead of packing and sending once. Can anyone suggest a smarter solution?
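One middle ground is to keep the multiple-send approach but cap each call's count so that its byte size stays below 2^31-1. A minimal sketch of the chunking arithmetic (Python for illustration only; `chunk_counts` is a hypothetical helper, not an MPI API):

```python
# Hypothetical sketch: split a large element count into chunks whose byte
# size each fits in a 32-bit signed count, so every MPI call stays legal.
INT32_MAX = 2**31 - 1

def chunk_counts(total_elems, elem_bytes, limit_bytes=INT32_MAX):
    """Yield per-call element counts whose byte size stays within limit_bytes."""
    max_elems = limit_bytes // elem_bytes
    while total_elems > 0:
        n = min(total_elems, max_elems)
        yield n
        total_elems -= n

# Example: a buffer of 354571000 double-precision (8-byte) elements,
# 2836568000 bytes in total, needs two sends instead of one.
counts = list(chunk_counts(354571000, 8))
print(len(counts), sum(counts))  # 2 354571000
```

Each yielded count can then drive one MPI_Send (or one MPI_Pack into a smaller staging buffer), with an offset advanced by the previous counts.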
