Intel® MPI Library

MPI one sided communication with malloc vs MPI_Alloc_mem

csnatarajan
Beginner
Hello,
Does Intel MPI support one-sided communication (MPI_Win_lock, MPI_Put, MPI_Win_unlock) on memory allocated with malloc, or does the allocation have to be done with MPI_Alloc_mem? The standard leaves this up to the implementation, and I couldn't find an answer in the forums or the documentation.

Thanks,
CSN
James_T_Intel
Moderator
Hi CSN,

Either method of allocation works in the Intel MPI Library. In a small test (2 processes on the same computer, run on both Windows* and Linux*, transferring a single float array), I did not see a performance difference between MPI_Alloc_mem and malloc. This could very well change in a different scenario, but switching between the two is not difficult:

[cpp]float *a;
int nElements;

/* Select the allocator at compile time. */
#ifdef USE_MPI_MALLOC
MPI_Alloc_mem(nElements * sizeof(float), MPI_INFO_NULL, &a);
#else
a = (float *) malloc(nElements * sizeof(float));
#endif

...

/* Release with the matching deallocator. */
#ifdef USE_MPI_MALLOC
MPI_Free_mem(a);
#else
free(a);
#endif[/cpp]
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
csnatarajan
Beginner
Hi James,
Thanks for the quick response; this is great news. I have my own array library that needs to support up to 5D arrays, so I didn't want to rewrite it without confirmation one way or the other.

Cheers,
C.S.N