Analyzers

MPI_Send/MPI_Recv don't work with more than 8182 doubles

Paolo_M_
Beginner
Hi, I'm having some trouble with the attached code SendReceive.c. The idea is to open a dataset with process p-1 and then distribute it to the remaining processes. This solution works when the variable ln (the local number of elements) is less than 8182. When I increase the number of elements I get the following error:

mpiexec -np 2 ./sendreceive 16366
Process 0 is receiving 8183 elements from process 1
Process 1 is sending 8183 elements to process 0
Fatal error in MPI_Recv: Other MPI error, error stack:
MPI_Recv(224)...................: MPI_Recv(buf=0x2000590, count=8183, MPI_DOUBLE, src=1, tag=MPI_ANY_TAG, MPI_COMM_WORLD, status=0x1) failed
PMPIDI_CH3I_Progress(623).......: fail failed
pkt_RTS_handler(317)............: fail failed
do_cts(662).....................: fail failed
MPID_nem_lmt_dcp_start_recv(288): fail failed
dcp_recv(154)...................: Internal MPI error! cannot read from remote process

I'm using the student license of the Intel implementation of MPI (obtained by installing Intel® Parallel Studio XE Cluster Edition, which includes Fortran and C/C++). Is this a limitation of the license? Otherwise, what am I doing wrong?
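For readers without the attachment, here is a minimal sketch of the pattern described above; apart from the MPI calls themselves, every name in it is assumed rather than taken from SendReceive.c. Process p-1 allocates ln doubles and sends them to every other rank with MPI_Ssend, and each other rank posts a matching MPI_Recv:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, p;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int n  = (argc > 1) ? atoi(argv[1]) : 16366;   /* total number of elements */
    int ln = n / p;                                /* local number of elements */

    double *buffer = (double*)calloc(ln, sizeof(double));

    if (rank == p - 1) {
        /* process p-1 would read the dataset here, then distribute it;
           for brevity the same buffer is sent to every rank */
        for (int dest = 0; dest < p - 1; dest++) {
            printf("Process %d is sending %d elements to process %d\n", rank, ln, dest);
            MPI_Ssend(buffer, ln, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
        }
    } else {
        printf("Process %d is receiving %d elements from process %d\n", rank, ln, p - 1);
        MPI_Recv(buffer, ln, MPI_DOUBLE, p - 1, MPI_ANY_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buffer);
    MPI_Finalize();
    return 0;
}

Run as in the failing case above: mpiexec -np 2 ./sendreceive 16366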
4 Replies
Kevin_O_Intel1
Employee

 

Hi,

It is not a limitation in the license.

Glancing at your code, it looks like your buffer size is the issue.

16366/2 = 8183... this is the size of the buffer you declare.

Paolo_M_
Beginner

Hi Kevin,

as you can see in the code below, the size of the buffer is ln, the same as the number of elements sent. Why should that be the problem?

buffer = (double*)calloc(ln, sizeof(double));

retCode = MPI_Ssend (buffer, ln, MPI_DOUBLE, i, 0, MPI_COMM_WORLD);

retCode = MPI_Recv (buffer, ln, MPI_DOUBLE, p-1, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
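To rule out a size mismatch on my side, here is a sketch of a check I could wrap around those two calls. The helper name check and its messages are new; rank, i, ln, buffer, p, and retCode are the same variables as in my code, and MPI_Comm_set_errhandler, MPI_ERRORS_RETURN, and MPI_Error_string are standard MPI calls:

#include <mpi.h>
#include <stdio.h>

/* Print a readable message for a failing MPI call instead of aborting
   (errors are fatal by default, so MPI_ERRORS_RETURN must be set first). */
static void check(int retCode, const char *what, int rank)
{
    if (retCode != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len = 0;
        MPI_Error_string(retCode, msg, &len);
        fprintf(stderr, "rank %d: %s failed: %s\n", rank, what, msg);
    }
}

Used around the two calls above it would look like this:

MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
printf("rank %d: ln = %d, buffer holds %zu bytes\n", rank, ln, ln * sizeof(double));

check(MPI_Ssend(buffer, ln, MPI_DOUBLE, i, 0, MPI_COMM_WORLD), "MPI_Ssend", rank);   /* on rank p-1 */
check(MPI_Recv(buffer, ln, MPI_DOUBLE, p-1, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE), "MPI_Recv", rank);   /* on the other ranks */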

 

Kevin_O_Intel1
Employee

 

Hi Paolo,

I think this issue would be better handled in this forum: https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/

Kevin
