Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Intel oneAPI 2021.4 SHM Issue?

stefan-maxar
Novice

Hello! We recently upgraded to Intel oneAPI 2021.4 (base kit and HPC kit) from 2021.2 for our HPC applications. 

One of our applications, which runs on a single node via SLURM, begins to execute and then exits with the following error (the application ran without issue with 2021.2):

Assertion failed in file ../../src/mpid/ch4/shm/posix/eager/include/intel_transport_send.h at line 568: actual_pack_bytes == frame_sz
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(MPL_backtrace_show+0x1c) [0x154acf30ac8c]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(MPIR_Assert_fail+0x21) [0x154aced86fe1]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x7ff893) [0x154acf04d893]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x7f8bcd) [0x154acf046bcd]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x801669) [0x154acf04f669]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x501e58) [0x154aced4fe58]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(MPI_Isend+0x8cc) [0x154aced5294c]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x88715) [0x154ace8d6715]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x8743d) [0x154ace8d543d]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(PMPI_File_read_all+0x144) [0x154acf32a324]
/shared/intel/mpi/2021.4.0//lib/libmpifort.so.12(pmpi_file_read_all_+0x58) [0x154ad0003458]


After some debugging, disabling the SHM provider by setting export I_MPI_FABRICS=ofi fixes the issue, and the application executes as expected. The application is run via a script submitted to SLURM with sbatch, and the application itself is launched with mpiexec. It runs on a single node only, so no internode communication is performed.
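The workaround amounts to forcing the pure-OFI path before launch. A minimal sketch of the relevant fragment of the submission script (the launch line and application name are placeholders, not the actual redacted script):

```shell
#!/bin/bash
# Workaround: bypass the SHM provider entirely so the assertion in
# intel_transport_send.h (the SHM eager path) is never reached.
export I_MPI_FABRICS=ofi
echo "I_MPI_FABRICS=${I_MPI_FABRICS}"
# mpiexec -n 10 ./app   # placeholder for the actual launch line
```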

The following is our configuration:

OS: AmazonLinux2 (AWS)

Interconnect: EFA (AWS)

SLURM version: 20.11.7

OFI Provider: AWS Libfabric 1.13.2

Some environmental variables set:

export I_MPI_OFI_LIBRARY_INTERNAL=0
export I_MPI_OFI_PROVIDER="efa"
export I_MPI_PIN_RESPECT_CPUSET=0
export I_MPI_HYDRA_BOOTSTRAP=slurm
export I_MPI_HYDRA_RMK=slurm
export KMP_STACKSIZE=2G
export KMP_AFFINITY="scatter"
export I_MPI_PIN=1
export I_MPI_EXTRA_FILESYSTEM=1
export I_MPI_EXTRA_FILESYSTEM_FORCE="lustre"

 

Thanks for any insight you can provide!

1 Solution
SantoshY_Intel
Moderator

Hi,


Thank you for your patience. The issue you raised has been fixed in Intel MPI 2021.6 (HPC Toolkit 2022.2). Please download it and let us know if this resolves your issue.


Thanks & Regards,

Santosh


14 Replies
SantoshY_Intel
Moderator

Hi,

Thanks for reaching out to us.

>>"the application ran without issue with 2021.2"

Could you please confirm whether your application ran successfully using I_MPI_FABRICS=shm with Intel oneAPI 2021.2?

Could you please provide us with a sample reproducer code, along with the command line you used inside the script to run the MPI application, so that we can investigate your issue further?

Also, please provide us with the complete debug log using I_MPI_DEBUG and the '-v' option on the command line with Intel oneAPI 2021.4. For example:

 I_MPI_DEBUG=10 I_MPI_FABRICS=shm mpiexec -v -n 1 -ppn 1 ./a.out

Could you please also confirm whether you face the same issue when running any sample MPI program using I_MPI_FABRICS=shm with Intel oneAPI 2021.4?

Thanks & Regards,

Santosh

stefan-maxar
Novice

Hello! 

I was able to run some of the tests you requested and dig in a bit more. Increasing I_MPI_DEBUG on the failed run reveals:

[0] MPI startup(): Intel(R) MPI Library, Version 2021.4  Build 20210831 (id: 758087adf)
[0] MPI startup(): Copyright (C) 2003-2021 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[0] MPI startup(): shm segment size (452 MB per rank) * (10 local ranks) = 4522 MB total
[0] MPI startup(): libfabric version: 1.13.2amzn1.0
[0] MPI startup(): libfabric provider: efa
[0] MPI startup(): File "" not found
[0] MPI startup(): Load tuning file: "/shared/intel/mpi/2021.4.0/etc/tuning_generic_shm-ofi.dat"
[0] MPI startup(): Rank    Pid      Node name                Pin cpu
[0] MPI startup(): 0       8126     [redacted]  {0}
[0] MPI startup(): 1       8127     [redacted]  {1}
[0] MPI startup(): 2       8128     [redacted]  {2}
[0] MPI startup(): 3       8129     [redacted]  {3}
[0] MPI startup(): 4       8130     [redacted]  {4}
[0] MPI startup(): 5       8131     [redacted]  {5}
[0] MPI startup(): 6       8132     [redacted]  {6}
[0] MPI startup(): 7       8133     [redacted]  {7}
[0] MPI startup(): 8       8134     [redacted]  {8}
[0] MPI startup(): 9       8135     [redacted]  {9}
[0] MPI startup(): I_MPI_OFI_LIBRARY_INTERNAL=0
[0] MPI startup(): I_MPI_ROOT=/shared/intel/mpi/2021.4.0
[0] MPI startup(): I_MPI_HYDRA_RMK=slurm
[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc
[0] MPI startup(): I_MPI_HYDRA_BOOTSTRAP=slurm
[0] MPI startup(): I_MPI_PIN=1
[0] MPI startup(): I_MPI_PIN_RESPECT_CPUSET=0
[0] MPI startup(): I_MPI_INTERNAL_MEM_POLICY=default
[0] MPI startup(): I_MPI_EXTRA_FILESYSTEM=1
[0] MPI startup(): I_MPI_EXTRA_FILESYSTEM_FORCE=lustre
[0] MPI startup(): I_MPI_FABRICS=shm:ofi
[0] MPI startup(): I_MPI_OFI_PROVIDER=efa
[0] MPI startup(): I_MPI_DEBUG=10

followed by the same assertion error posted previously.

Running with just SHM as the provider:

+ mpiexec -env I_MPI_DEBUG=10 [redacted]
+ 0< itag
[0] MPI startup(): Intel(R) MPI Library, Version 2021.4  Build 20210831 (id: 758087adf)
[0] MPI startup(): Copyright (C) 2003-2021 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[0] MPI startup(): shm segment size (452 MB per rank) * (10 local ranks) = 4522 MB total
[0] MPI startup(): File "" not found
[0] MPI startup(): Load tuning file: "/shared/intel/mpi/2021.4.0/etc/tuning_generic_shm.dat"
[0] MPI startup(): Rank    Pid      Node name                Pin cpu
[0] MPI startup(): 0       8780     [redacted]  {0}
[0] MPI startup(): 1       8781     [redacted]  {1}
[0] MPI startup(): 2       8782     [redacted]  {2}
[0] MPI startup(): 3       8783     [redacted]  {3}
[0] MPI startup(): 4       8784     [redacted]  {4}
[0] MPI startup(): 5       8785     [redacted]  {5}
[0] MPI startup(): 6       8786     [redacted]  {6}
[0] MPI startup(): 7       8787     [redacted]  {7}
[0] MPI startup(): 8       8788     [redacted]  {8}
[0] MPI startup(): 9       8789     [redacted]  {9}
[0] MPI startup(): I_MPI_OFI_LIBRARY_INTERNAL=0
[0] MPI startup(): I_MPI_ROOT=/shared/intel/mpi/2021.4.0
[0] MPI startup(): I_MPI_HYDRA_RMK=slurm
[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc
[0] MPI startup(): I_MPI_HYDRA_BOOTSTRAP=slurm
[0] MPI startup(): I_MPI_PIN=1
[0] MPI startup(): I_MPI_PIN_RESPECT_CPUSET=0
[0] MPI startup(): I_MPI_INTERNAL_MEM_POLICY=default
[0] MPI startup(): I_MPI_EXTRA_FILESYSTEM=1
[0] MPI startup(): I_MPI_EXTRA_FILESYSTEM_FORCE=lustre
[0] MPI startup(): I_MPI_FABRICS=shm
[0] MPI startup(): I_MPI_OFI_PROVIDER=efa
[0] MPI startup(): I_MPI_DEBUG=10

....

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source             
ncep_post          0000000000E9E8EA  Unknown               Unknown  Unknown
libpthread-2.26.s  00001553592C57E0  Unknown               Unknown  Unknown
libmpi.so.12.0.0   0000155359DE21E0  MPI_Sendrecv          Unknown  Unknown
libmpifort.so.12.  000015535AE9C515  pmpi_sendrecv         Unknown  Unknown
ncep_post          0000000000626185  exch_                      58  EXCH.f
ncep_post          00000000008CB5A3  initpost_gfs_nems         373  INITPOST_GFS_NEMS_MPIIO.f
ncep_post          00000000006F037D  MAIN__                    803  WRFPOST.f
ncep_post          0000000000410442  Unknown               Unknown  Unknown
libc-2.26.so       00001553585C613A  __libc_start_main     Unknown  Unknown
ncep_post          000000000041036A  Unknown               Unknown  Unknown


Given that it threw a segmentation fault with just SHM, I figured the problem could be related to the per-rank SHM segment size. Interestingly, if I run the task with more MPI ranks (30 in the log below), it works as expected:

+ mpiexec -env I_MPI_DEBUG=10 [redacted]
+ 0< itag
[0] MPI startup(): Intel(R) MPI Library, Version 2021.4  Build 20210831 (id: 758087adf)
[0] MPI startup(): Copyright (C) 2003-2021 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[0] MPI startup(): shm segment size (349 MB per rank) * (30 local ranks) = 10488 MB total
[0] MPI startup(): libfabric version: 1.13.2amzn1.0
[0] MPI startup(): libfabric provider: efa
[0] MPI startup(): File "" not found
[0] MPI startup(): Load tuning file: "/shared/intel/mpi/2021.4.0/etc/tuning_generic_shm-ofi.dat"
[0] MPI startup(): Rank    Pid      Node name                Pin cpu
[0] MPI startup(): 0       11297    [redacted]  {0}
[0] MPI startup(): 1       11298    [redacted]  {1}
[0] MPI startup(): 2       11299    [redacted]  {2}
[0] MPI startup(): 3       11300    [redacted]  {3}
[0] MPI startup(): 4       11301    [redacted]  {4}
[0] MPI startup(): 5       11302    [redacted]  {5}
[0] MPI startup(): 6       11303    [redacted]  {6}
[0] MPI startup(): 7       11304    [redacted]  {7}
[0] MPI startup(): 8       11305    [redacted]  {8}
[0] MPI startup(): 9       11306    [redacted]  {9}
[0] MPI startup(): 10      11307    [redacted]  {10}
[0] MPI startup(): 11      11308    [redacted]  {11}
[0] MPI startup(): 12      11309    [redacted]  {12}
[0] MPI startup(): 13      11310    [redacted]  {13}
[0] MPI startup(): 14      11311    [redacted]  {14}
[0] MPI startup(): 15      11312    [redacted]  {15}
[0] MPI startup(): 16      11313    [redacted]  {16}
[0] MPI startup(): 17      11314    [redacted]  {17}
[0] MPI startup(): 18      11315    [redacted]  {18}
[0] MPI startup(): 19      11316    [redacted]  {19}
[0] MPI startup(): 20      11317    [redacted]  {20}
[0] MPI startup(): 21      11318    [redacted]  {21}
[0] MPI startup(): 22      11319    [redacted]  {22}
[0] MPI startup(): 23      11320    [redacted]  {23}
[0] MPI startup(): 24      11321    [redacted]  {24}
[0] MPI startup(): 25      11322    [redacted]  {25}
[0] MPI startup(): 26      11323    [redacted]  {26}
[0] MPI startup(): 27      11324    [redacted]  {27}
[0] MPI startup(): 28      11325    [redacted]  {28}
[0] MPI startup(): 29      11326    [redacted]  {29}
[0] MPI startup(): I_MPI_OFI_LIBRARY_INTERNAL=0
[0] MPI startup(): I_MPI_ROOT=/shared/intel/mpi/2021.4.0
[0] MPI startup(): I_MPI_HYDRA_RMK=slurm
[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc
[0] MPI startup(): I_MPI_HYDRA_BOOTSTRAP=slurm
[0] MPI startup(): I_MPI_PIN=1
[0] MPI startup(): I_MPI_PIN_RESPECT_CPUSET=0
[0] MPI startup(): I_MPI_INTERNAL_MEM_POLICY=default
[0] MPI startup(): I_MPI_EXTRA_FILESYSTEM=1
[0] MPI startup(): I_MPI_EXTRA_FILESYSTEM_FORCE=lustre
[0] MPI startup(): I_MPI_FABRICS=shm:ofi
[0] MPI startup(): I_MPI_OFI_PROVIDER=efa
[0] MPI startup(): I_MPI_DEBUG=10

 

So, thinking that each rank was running out of SHM, I tried increasing the SHM heap sizes using I_MPI_SHM_HEAP_VSIZE and I_MPI_SHM_HEAP_CSIZE when running with only 10 ranks. Unfortunately, this did not work, even with an overly large per-rank segment size of I_MPI_SHM_HEAP_VSIZE=24576. The node's /dev/shm is plenty large for the task (185 GB), and the SHM kernel limits are sufficiently large as well.
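For reference, the heap override attempted above boils down to the following sketch (only the VSIZE value mentioned above is shown; to my understanding, I_MPI_SHM_HEAP_VSIZE takes megabytes per rank):

```shell
# Attempted (and unsuccessful) workaround: enlarge the per-rank SHM heap.
# 24576 is deliberately oversized; the crash still occurred with this set.
export I_MPI_SHM_HEAP_VSIZE=24576
echo "I_MPI_SHM_HEAP_VSIZE=${I_MPI_SHM_HEAP_VSIZE}"
```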

Are there any other SHM-related environment variables I can use to modify the allocation behavior on a per-rank basis? It seems the default allocation behavior may have changed between 2021.2 and 2021.4: 2021.2 works fine with 10 ranks and standard allocation, while 2021.4 appears to blow through the available space. The fact that 2021.4 can run the application with shm:ofi and 30 ranks is promising...but some knob needs to be turned up for 10 ranks!

SantoshY_Intel
Moderator

Hi,

Thanks for providing the debug logs.

Could you please set the environment variable below, rerun the MPI program, and provide us with the complete debug log?

export FI_EFA_ENABLE_SHM_TRANSFER=0

Thanks & Regards,

Santosh

SantoshY_Intel
Moderator

Hi,


We haven't heard back from you. Could you please provide us with the complete debug log after setting FI_EFA_ENABLE_SHM_TRANSFER=0?


Thanks & Regards,

Santosh



stefan-maxar
Novice

Hello! Sorry for the delay. I ran a test with FI_EFA_ENABLE_SHM_TRANSFER=0 (and I_MPI_FABRICS=shm:ofi) and got the same error as before:

Assertion failed in file ../../src/mpid/ch4/shm/posix/eager/include/intel_transport_send.h at line 568: actual_pack_bytes == frame_sz
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(MPL_backtrace_show+0x1c) [0x154acf30ac8c]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(MPIR_Assert_fail+0x21) [0x154aced86fe1]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x7ff893) [0x154acf04d893]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x7f8bcd) [0x154acf046bcd]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x801669) [0x154acf04f669]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x501e58) [0x154aced4fe58]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(MPI_Isend+0x8cc) [0x154aced5294c]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x88715) [0x154ace8d6715]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(+0x8743d) [0x154ace8d543d]
/shared/intel/mpi/2021.4.0//lib/release/libmpi.so.12(PMPI_File_read_all+0x144) [0x154acf32a324]
/shared/intel/mpi/2021.4.0//lib/libmpifort.so.12(pmpi_file_read_all_+0x58) [0x154ad0003458]

 

However, I did have a promising find after some further digging. With I_MPI_FABRICS=shm:ofi, I set I_MPI_SHM_CELL_BWD_SIZE=2048000 and the application worked as expected with 10 ranks. Setting I_MPI_SHM_CELL_BWD_SIZE=1024000 yielded the same error as above. So, increasing the BWD cell size seems to solve the issue! Perhaps the default value of I_MPI_SHM_CELL_BWD_SIZE changed between 2021.2 and 2021.4? Thanks for the help.
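Collected in one place, the configuration that made the 10-rank run succeed (the launch line is a placeholder; to my understanding the BWD cell size is given in bytes):

```shell
# Working configuration on Intel MPI 2021.4 with 10 ranks on one node:
export I_MPI_FABRICS=shm:ofi
export I_MPI_SHM_CELL_BWD_SIZE=2048000   # 1024000 still reproduced the assert
echo "${I_MPI_FABRICS} ${I_MPI_SHM_CELL_BWD_SIZE}"
# mpiexec -n 10 ./ncep_post   # placeholder launch line
```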

SantoshY_Intel
Moderator

Hi,

We have redirected your issue to the development team concerned, and they are looking into it. It will be fixed in a future Intel oneAPI release; we will keep you updated once the fix is available.


Thanks & Regards,

Santosh


JEROME_B_Intel1
Employee

I'm encountering exactly the same issue while trying to build HDF5 in its parallel configuration. Specifically, it builds without error, but when I run "make check" to run the tests, it fails in the test called t_bigio:

Read Testing Dataset1 by COL
Assertion failed in file ../../src/mpid/ch4/shm/posix/eager/include/intel_transport_send.h at line 568: actual_pack_bytes == frame_sz
/opt/intel/oneapi/mpi/2021.4.0//lib/release/libmpi.so.12(MPL_backtrace_show+0x1c) [0x7f08f252dc8c]
/opt/intel/oneapi/mpi/2021.4.0//lib/release/libmpi.so.12(MPIR_Assert_fail+0x21) [0x7f08f1fa9fe1]

....

 

I am wondering what the status of this bug is.

Kevin_McGrattan

I believe that I also have this same issue. I have a large computational fluid dynamics code that runs a case using 2 MPI processes. The case worked with 2021.2, but not with 2021.4 or 2022.1.

The problem is that an ISEND/IRECV pair times out after about 30,000 successful exchanges. Using ofi on one or two nodes works, but shm fails.

SantoshY_Intel
Moderator

Hi,


As mentioned earlier, we have redirected this issue to the development team concerned, and they are still working on it. It will be fixed in a future Intel oneAPI release.

We will give you an update once the issue is fixed.


Thanks & Regards,

Santosh



SantoshY_Intel
Moderator

Hi,


Thank you for your patience. The issue you raised has been fixed in Intel MPI 2021.6 (HPC Toolkit 2022.2). Please download it and let us know if this resolves your issue.


Thanks & Regards,

Santosh


Kevin_McGrattan

I am not the original poster, but my issues have been resolved. Thanks.

SantoshY_Intel
Moderator

Hi,


We assume that your issue is resolved. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.


Thanks & Regards,

Santosh


stefan-maxar
Novice

Confirming that this has been resolved in Intel oneAPI 2022.2 with Intel MPI 2021.6. Thanks!
