We have solved the Laplace equation using standard blocking communication. My case is different, since I have a huge CFD code with more than 32 arrays that need to be updated each time step. Therefore, I decided to use the persistent communication setup: MPI_SEND_INIT and MPI_RECV_INIT. These calls are in a separate subroutine that I call at the beginning of the simulation to set up the communication properties. This is a short example for one of the arrays I use. As you see, I use Fortran MPI; more specifically, mpiifort.
Unfortunately, it does not seem to be working: the values at the ghost cells are zeroes, so the information is either not sent or not received properly. In the main loop I start the communication using:
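For readers without the book at hand, a minimal sketch of the pattern being described might look as follows. All names here (setup_comm, u_old, nx, ny, left, right, req) are illustrative placeholders, not taken from the poster's actual code; it assumes a 1-D decomposition exchanging whole columns.

```fortran
! Hypothetical sketch of a persistent-communication setup routine.
! Names are illustrative, not the poster's actual identifiers.
subroutine setup_comm(u_old, nx, ny, left, right, req)
  use mpi
  implicit none
  integer, intent(in) :: nx, ny, left, right
  double precision, intent(inout) :: u_old(nx, ny)
  integer, intent(out) :: req(4)
  integer :: ierr

  ! Persistent requests are bound ONCE to fixed buffers. Whole
  ! columns like u_old(:,2) are contiguous in Fortran, so passing
  ! the first element u_old(1,2) describes the real storage.
  call MPI_SEND_INIT(u_old(1,2),    nx, MPI_DOUBLE_PRECISION, left,  0, &
                     MPI_COMM_WORLD, req(1), ierr)
  call MPI_SEND_INIT(u_old(1,ny-1), nx, MPI_DOUBLE_PRECISION, right, 1, &
                     MPI_COMM_WORLD, req(2), ierr)
  call MPI_RECV_INIT(u_old(1,ny),   nx, MPI_DOUBLE_PRECISION, right, 0, &
                     MPI_COMM_WORLD, req(3), ierr)
  call MPI_RECV_INIT(u_old(1,1),    nx, MPI_DOUBLE_PRECISION, left,  1, &
                     MPI_COMM_WORLD, req(4), ierr)
end subroutine setup_comm
```

Inside the time loop, each step would then start and complete the four transfers with `call MPI_STARTALL(4, req, ierr)` followed by `call MPI_WAITALL(4, req, MPI_STATUSES_IGNORE, ierr)`.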
Like I said, both calls are in different subroutines, but I compile everything together. The code runs "without" problems, but the solution is wrong because I only get zeroes in my ghost values.
Does anyone have the Laplace solution using persistent communication, or experience with this approach? A similar approach is followed in the textbook MPI: The Complete Reference, but it does not seem to work in my case.
Thanks beforehand!
- Parallel Computing
While I haven't used persistent communication (and may be entirely wrong about this), in looking at your CALL MPI_SEND_INIT argument list I see something that may be a problem ("may" being stressed).
u_old(2,:) is a non-unit-stride array section. The argument refers to a non-contiguous chunk of memory, so the compiler may generate a temporary containing a copy of that section at the time of the call. The same applies to the other three calls listed above. If this is true, it would explain part of the symptom you describe. It would also explain why your program did not crash: the actual transfer occurs after the temporary has been returned (to stack or heap).
To correct for this, redefine your arrays so that the data you exchange is contiguous. In other words, your indexes are transposed (which requires transposing the indexes at every point of use).
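To make the contiguity point concrete, here is a hedged sketch of the contrast being suggested; u_old, nx, ny, dest, req, and ierr are placeholders, not the poster's actual names.

```fortran
! Illustrative only: all names are placeholders.
double precision :: u_old(nx, ny)

! PROBLEMATIC: u_old(2,:) has stride nx between elements, so the
! compiler may pass MPI a temporary copy of the section; the
! persistent request then remembers the temporary's address,
! not the array's.
call MPI_SEND_INIT(u_old(2,:), ny, MPI_DOUBLE_PRECISION, dest, 0, &
                   MPI_COMM_WORLD, req, ierr)

! SAFE after transposing the indexes: u_old(:,2) is one contiguous
! column, and passing its first element describes the real storage.
call MPI_SEND_INIT(u_old(1,2), nx, MPI_DOUBLE_PRECISION, dest, 0, &
                   MPI_COMM_WORLD, req, ierr)
```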
Thank you very much, Mr. Dempsey.
I followed your advice and it did not work. In fact, I agree with you. Unfortunately, the code is structured in a way that goes against Fortran's column-major ordering, and restructuring it would be a huge amount of work, so I need to stick with the current format. I will try other options to see if something kicks in.
That first argument is a pointer to a contiguous buffer, so it is not clear how use of the Fortran colon syntax makes sense there. The compiler may be turning that line into a loop, in which case the last call in the loop points to the final value in the array.
Thank you very much for your comments. I followed the advice from the web: basically, I am working with a section of the data that is not contiguous, so I used MPI_TYPE_VECTOR to deal with it, but the problem persists. I am using allocatable arrays to adjust the size of each array to the needs of each processor. Still, I see zeroes on the ghost plane. In the first time step the ghost cells show only zeroes; in the second time step, the first 5 elements contain random numbers and the rest are zeroes.
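Two things commonly go wrong when combining MPI_TYPE_VECTOR, allocatable arrays, and persistent requests: the array is (re)allocated after the *_INIT calls, or an array section rather than a first element is passed as the buffer. A hedged sketch of the safe pattern, assuming a row u_old(2,1:ny) must be exchanged (all names are placeholders):

```fortran
! Hypothetical sketch; u_old, nx, ny, dest, src are placeholders.
integer :: row_type, ierr, req(2)
double precision, allocatable :: u_old(:,:)

allocate(u_old(nx, ny))   ! allocate BEFORE the *_INIT calls;
                          ! reallocating afterwards leaves the
                          ! persistent requests pointing at freed memory

! Describe row 2: one element from each of ny columns, stride nx apart.
call MPI_TYPE_VECTOR(ny, 1, nx, MPI_DOUBLE_PRECISION, row_type, ierr)
call MPI_TYPE_COMMIT(row_type, ierr)

! Pass the FIRST ELEMENT of the row, never a section like u_old(2,:):
! a section may be copied into a temporary that is gone by the time
! MPI_START actually performs the transfer.
call MPI_SEND_INIT(u_old(2,1), 1, row_type, dest, 0, &
                   MPI_COMM_WORLD, req(1), ierr)
call MPI_RECV_INIT(u_old(1,1), 1, row_type, src,  1, &
                   MPI_COMM_WORLD, req(2), ierr)
```

If the receive side shows zeroes on the first step and garbage later, checking the allocation order against the *_INIT calls is a reasonable first diagnostic.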
Here is the main program: