Hello all,
I have come across the following problem with one-sided communication using MPI_ACCUMULATE. The versions are:
ifort (IFORT) 19.0.3.199 20190206
Intel(R) MPI Library for Linux* OS, Version 2019 Update 3 Build 20190214 (id: b645a4a54)
The attached program does a very basic calculation using one-sided communication with MPI_ACCUMULATE (and MPI_WIN_FENCE to synchronize). Compile it with
mpif90 test.f donothing.f
The program accepts a command line argument. For example,
mpiexec -np 1 ./a.out 10
simply runs the calculation ten times (on a single process).
When I run the program, it crashes with a segmentation fault in MPI_WIN_FENCE whenever the argument is larger than roughly 8615, but only if a single (!) process is used. With any other number of processes, the run completes successfully!
When I set FI_PROVIDER to tcp (it was unset before), the behavior changes: the run hangs for any argument larger than 12, and for very large arguments it crashes with "Fatal error in PMPI_Win_fence: Other MPI error".
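For reference, switching the provider amounts to something like the following in the shell before launching (the argument 13 is just an example above the hang threshold):

export FI_PROVIDER=tcp
mpiexec -np 1 ./a.out 13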
(The dummy routine "donothing" is a substitute for "mpi_f_sync_reg", which does not exist in this version of Intel MPI.)
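For readers without the attachment, a minimal sketch of such a test program could look as follows. The ring-style accumulate, the window layout, and all variable names here are illustrative assumptions, not the actual attached code:

c --- test.f (sketch, not the attached file) ---
      program test
      use mpi
      implicit none
      integer ierr, rank, nprocs, win, niter, i
      integer(kind=MPI_ADDRESS_KIND) winsize
      character(len=32) arg
      double precision buf, val

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

c     number of repetitions from the command line
      call get_command_argument(1, arg)
      read (arg, *) niter

c     expose one double precision value per process in an RMA window
      winsize = 8
      call MPI_Win_create(buf, winsize, 8, MPI_INFO_NULL,
     &     MPI_COMM_WORLD, win, ierr)

      do i = 1, niter
         buf = 0.0d0
         call donothing(buf)
         call MPI_Win_fence(0, win, ierr)
         val = 1.0d0
c        add val into the window of the next rank (ring pattern)
         call MPI_Accumulate(val, 1, MPI_DOUBLE_PRECISION,
     &        mod(rank+1, nprocs), 0_MPI_ADDRESS_KIND, 1,
     &        MPI_DOUBLE_PRECISION, MPI_SUM, win, ierr)
         call MPI_Win_fence(0, win, ierr)
         call donothing(buf)
      end do

      if (rank .eq. 0) print *, 'result:', buf
      call MPI_Win_free(win, ierr)
      call MPI_Finalize(ierr)
      end

c --- donothing.f ---
      subroutine donothing(a)
c     opaque external call so the compiler cannot keep "a" in a
c     register across the fences (a stand-in for mpi_f_sync_reg)
      double precision a
      end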
Thank you.
Best wishes
Christoph
I would suggest that you post this in https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology as it does not seem to be compiler-related.
Yes, true, thank you. I have just posted it in the HPC forum.
