When relying on shm for single-node execution, a call to MPI_Ibsend leads to a segfault (see attached minimal reproducer):
```
$ mpicc testIbsend.c
$ mpiexec --version
Intel(R) MPI Library for Linux* OS, Version 2021.6 Build 20220227
$ env I_MPI_FABRICS=shm mpiexec -hosts=localhost -n 2 ./a.out
Ready: 1 of 2 tasks.
Ready: 0 of 2 tasks.
Signing off: 1 of 2 tasks.
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 2573295 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 1 PID 2573296 RUNNING AT localhost
= KILLED BY SIGNAL: 9 (Killed)
===================================================================================
```
The second process successfully receives the message and signs off.
gdb points to a null-pointer access in:
https://github.com/pmodels/mpich/blob/3.4.x/src/mpi/pt2pt/ibsend.c#L38
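The attached reproducer itself is not shown in the thread; a hypothetical reconstruction consistent with the log output above (the "Ready"/"Signing off" lines, rank 0 calling MPI_Ibsend, rank 1 receiving) might look like this:

```c
/* Hypothetical minimal reproducer (the original attachment is not shown).
 * Rank 0 posts a buffered nonblocking send; rank 1 receives it.
 * Per the report, MPI_Ibsend segfaults under I_MPI_FABRICS=shm. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, size, msg = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Ready: %d of %d tasks.\n", rank, size);

    if (rank == 0) {
        /* Buffered sends need a user-attached buffer with room for
         * the payload plus MPI_BSEND_OVERHEAD. */
        int bufsize = (int)sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        MPI_Request req;
        MPI_Ibsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    printf("Signing off: %d of %d tasks.\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

Run as in the report: `mpicc testIbsend.c && env I_MPI_FABRICS=shm mpiexec -hosts=localhost -n 2 ./a.out`.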
Hi,
Thanks for posting in Intel Communities.
To resolve this issue, we kindly ask that you upgrade to the latest version of Intel MPI (2021.9), as we could not reproduce the problem with that version.
Could you also try compiling the MPI program with the Intel compilers (i.e., mpiicc instead of mpicc)?
Please see the attached screenshot below:
If the problem persists even after upgrading to Intel MPI 2021.9, please share the hardware and operating system specifications of the machine you are using.
Thanks & Regards,
Shaik Rabiya
Hi,
As suggested, I tested with the latest Intel MPI and could not reproduce the issue. I'll ask our sysadmins to roll out the latest version of Intel MPI on our cluster.
Thanks
Joachim
Hi,
Thanks for accepting our solution. If you need any additional information, please raise a new question, as this thread will no longer be monitored by Intel.
Thanks & Regards,
Shaik Rabiya
