Intel® Moderncode for Parallel Architectures
Support for developing parallel programming applications on Intel® Architecture.

Differentiating Broadcast and Recv messages

Jimmy821
Beginner
Hi,

My code performs broadcasts and point-to-point transmissions simultaneously, using bcast and isend.

At the receiving end, how can I differentiate between bcast and isend messages? The program is multi-threaded, with one receiving thread for broadcast messages and another for point-to-point messages. I have found that if I incorrectly accept incoming data with the ibcast function, the program crashes. How can I best resolve this?

Thanks for helping.

Kind Regards,
Jimmy
2 Replies
Henry_G_Intel
Employee
Hi Jimmy,

MPI_Bcast is a collective operation. There's no mechanism for it to accept messages from a point-to-point function like MPI_Isend. Are you doing something like this?

[cpp]switch(rank) {
    case 0:
        /* Root: broadcast first, then a point-to-point send */
        MPI_Bcast(buf1, count, type, 0, comm);
        MPI_Send(buf2, count, type, 1, tag, comm);
        break;
    case 1:
        /* Rank 1: posts the receive before joining the broadcast,
           i.e., the collective and point-to-point calls are reversed */
        MPI_Recv(buf2, count, type, 0, tag, comm, status);
        MPI_Bcast(buf1, count, type, 0, comm);
        break;
}[/cpp]

This code is incorrect because the MPI processes reverse the order of point-to-point and collective communication (a corrected ordering is sketched below). The MPI Forum shows several examples of erroneous collective operations: http://www.mpi-forum.org/docs/mpi-11-html/node86.html#Node86. Can you post some pseudocode illustrating what your program is doing?
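
For reference, a corrected ordering of that example would look roughly like this, with the collective call in the same position on every rank (same placeholder buffers, types, and tag as above):

[cpp]switch(rank) {
    case 0:
        MPI_Bcast(buf1, count, type, 0, comm);   /* every rank enters the broadcast first */
        MPI_Send(buf2, count, type, 1, tag, comm);
        break;
    case 1:
        MPI_Bcast(buf1, count, type, 0, comm);   /* matches rank 0's broadcast */
        MPI_Recv(buf2, count, type, 0, tag, comm, status);
        break;
}[/cpp]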

Best regards,
Henry

Jimmy821
Beginner
I want to distribute data as a broadcast to all the slave threads.

However, I need the slave threads to return different types of objects to the master that performed the broadcast. A simple bcast operation is not sufficient because it does not distinguish between the types.

In addition, I may want to perform inter-process communication between the slave threads. I feel that building on top of the MPI infrastructure is the best approach.
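
One idea I'm considering is tagging each returned message by its object type and letting the master probe the tag before posting the matching receive. A rough sketch (the tag names, buffers, and counts below are just placeholders):

[cpp]/* Tags encode the kind of object being returned; values are illustrative */
#define TAG_RESULT_INT    100
#define TAG_RESULT_DOUBLE 101

/* Slave side: send back whichever result type was produced */
if (have_ints)
    MPI_Send(int_buf, int_count, MPI_INT, 0, TAG_RESULT_INT, MPI_COMM_WORLD);
else
    MPI_Send(dbl_buf, dbl_count, MPI_DOUBLE, 0, TAG_RESULT_DOUBLE, MPI_COMM_WORLD);

/* Master side: probe first, then branch on the tag to receive the right type */
MPI_Status st;
MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
if (st.MPI_TAG == TAG_RESULT_INT) {
    int n;
    MPI_Get_count(&st, MPI_INT, &n);
    MPI_Recv(int_buf, n, MPI_INT, st.MPI_SOURCE, TAG_RESULT_INT,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else {
    int n;
    MPI_Get_count(&st, MPI_DOUBLE, &n);
    MPI_Recv(dbl_buf, n, MPI_DOUBLE, st.MPI_SOURCE, TAG_RESULT_DOUBLE,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}[/cpp]

That way the broadcast is only used for the one-to-all distribution, and the returned results travel on ordinary tagged point-to-point messages.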

Thanks.