Intel® Fortran Compiler

one-sided communication and shared memory

Christoph_F_
Beginner

Hello all,

I have come across the following problem with one-sided communication using MPI_ACCUMULATE. The versions are:
ifort (IFORT) 19.0.3.199 20190206
Intel(R) MPI Library for Linux* OS, Version 2019 Update 3 Build 20190214 (id: b645a4a54)

The attached program does a very basic calculation using one-sided communication with MPI_ACCUMULATE (and MPI_WIN_FENCE to synchronize). Compile it with

mpif90 test.f donothing.f

The program accepts a command line argument. For example,

mpiexec -np 1 ./a.out 10

simply runs the calculation ten times (on a single process).
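
For reference, here is a minimal sketch of the kind of program involved (illustrative only, not the actual attached test.f): each iteration adds one value into an MPI window between two fences.

      program test
      use mpi
      implicit none
      integer ierr, rank, n, i, win
      integer(kind=MPI_ADDRESS_KIND) winsize, disp
      double precision buf, acc
      character(len=32) arg

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

c     number of repetitions from the command line
      call get_command_argument(1, arg)
      read(arg,*) n

c     expose one double precision value on every process
      acc = 0d0
      winsize = 8
      call MPI_WIN_CREATE(acc, winsize, 8, MPI_INFO_NULL,
     &  MPI_COMM_WORLD, win, ierr)

      disp = 0
      do i = 1, n
        buf = 1d0
        call MPI_WIN_FENCE(0, win, ierr)
c       add buf into the window of rank 0
        call MPI_ACCUMULATE(buf, 1, MPI_DOUBLE_PRECISION, 0, disp,
     &    1, MPI_DOUBLE_PRECISION, MPI_SUM, win, ierr)
        call MPI_WIN_FENCE(0, win, ierr)
c       opaque external call so the compiler cannot keep acc in a
c       register across the fence (stand-in for MPI_F_SYNC_REG)
        call donothing(acc)
      enddo

      if (rank .eq. 0) write(*,*) 'accumulated:', acc
      call MPI_WIN_FREE(win, ierr)
      call MPI_FINALIZE(ierr)
      end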

When I run the program, it crashes with a segmentation fault in MPI_WIN_FENCE if the argument is larger than about 8615. Curiously, this happens only if a single (!) process is used; with any other number of processes, the run succeeds.

When I set FI_PROVIDER to tcp (it was unset before), the behavior changes: the program hangs for any argument larger than 12, and for very large arguments it crashes with "Fatal error in PMPI_Win_fence: Other MPI error".
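
For reference, that run looked like this (13 is just an example above the threshold where it starts to hang):

export FI_PROVIDER=tcp
mpiexec -np 1 ./a.out 13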

(The dummy routine "donothing" is a substitute for MPI_F_SYNC_REG, which does not exist in this version of Intel MPI.)
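
It can be as trivial as the following sketch; the point is only that it lives in a separately compiled file, so the compiler must assume the argument may be read or written and cannot cache it in a register:

c     donothing.f: deliberately empty external routine
      subroutine donothing(a)
      double precision a
      end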

Thank you.

Best wishes
Christoph

Steve_Lionel
Honored Contributor III

I would suggest that you post this in https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology, as it does not seem to be compiler-related.

Christoph_F_
Beginner

Yes, true, thank you. I have just posted it in the HPC forum.
