Intel® MPI Library

Bug Report: MPI_Ibsend leads to a segfault

jprotze
Beginner

When relying on shm for single-node execution, a call to MPI_Ibsend leads to a segfault (see attached minimal reproducer):

```

$ mpicc testIbsend.c

$ mpiexec --version
Intel(R) MPI Library for Linux* OS, Version 2021.6 Build 20220227

$ env I_MPI_FABRICS=shm mpiexec -hosts=localhost -n 2 ./a.out

Ready: 1 of 2 tasks.
Ready: 0 of 2 tasks.
Signing off: 1 of 2 tasks.

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 2573295 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 1 PID 2573296 RUNNING AT localhost
= KILLED BY SIGNAL: 9 (Killed)
===================================================================================

```

The second process successfully receives the message and signs off.

 

gdb points to a null-pointer access in:

https://github.com/pmodels/mpich/blob/3.4.x/src/mpi/pt2pt/ibsend.c#L38
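
The minimal reproducer is attached to the original post and is not reproduced in this thread. For readers without the attachment, a sketch along these lines would exercise the same path (the file name testIbsend.c and the "Ready"/"Signing off" messages come from the transcript above; the buffer size, tag, and payload value are assumptions):

```
/* testIbsend.c -- hypothetical sketch of the attached reproducer.
 * Rank 0 attaches a bsend buffer and posts an MPI_Ibsend; rank 1 receives. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, size, payload = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Ready: %d of %d tasks.\n", rank, size);

    if (rank == 0) {
        /* Attach a user buffer large enough for one int plus bookkeeping. */
        int bufsize = (int)sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        MPI_Request req;
        /* The segfault is reported inside this call with I_MPI_FABRICS=shm. */
        MPI_Ibsend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        void *detached;
        int detached_size;
        MPI_Buffer_detach(&detached, &detached_size);
        free(detached);
    } else if (rank == 1) {
        int recvd;
        MPI_Recv(&recvd, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    printf("Signing off: %d of %d tasks.\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

With Intel MPI 2021.6 and I_MPI_FABRICS=shm, rank 0 crashes before it can sign off, which matches the transcript above where only rank 1 prints its sign-off message.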

3 Replies
RabiyaSK_Intel
Employee

Hi,

 

Thanks for posting in Intel Communities.

 

To resolve this issue, we kindly ask that you upgrade to the latest version of Intel MPI (2021.9); we did not encounter any problems with the latest version.

 

Could you also try compiling the MPI program with the Intel compilers (i.e., mpiicc instead of mpicc)?
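
(For reference, on a typical oneAPI installation the suggested steps might look like the following; the setvars.sh path is an assumption and should be adjusted to the actual install location:)

```
$ source /opt/intel/oneapi/setvars.sh    # load the Intel MPI 2021.9 environment (path is an assumption)
$ mpiicc testIbsend.c -o testIbsend
$ I_MPI_FABRICS=shm mpiexec -n 2 ./testIbsend
```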

 

Please see the attached screenshot:

RabiyaSK_Intel_0-1685098749303.png

 

If the problem persists even after upgrading to the latest Intel MPI 2021.9, please provide the specifications of the hardware and operating system that you are currently using.

 

Thanks & Regards,

Shaik Rabiya

 

jprotze
Beginner

Hi,

 

As suggested, I tested with the latest Intel MPI and could not reproduce the issue. I'll ask our sysadmins to roll out the latest version of Intel MPI on our cluster.

 

Thanks

Joachim

RabiyaSK_Intel
Employee

Hi,


Thanks for accepting our solution. If you need any additional information, please raise a new question, as this thread will no longer be monitored by Intel.


Thanks & Regards,

Shaik Rabiya 

