The following is the stack trace when running with 2 cores. When running with 1 core, the code works.
These are the environment variables that were set:
```
[0] MPI startup(): Run 'pmi_process_mapping' nodemap algorithm
[0] MPI startup(): Intel(R) MPI Library, Version 2021.3.1 Build 20210719 (id: 48425b416)
[0] MPI startup(): Copyright (C) 2003-2021 Intel Corporation. All rights reserved.
[0] MPI startup(): library kind: release
impi_mbind_local(): mbind(p=0x7f497eef8000, size=1073741824) error=1 "Operation not permitted"
impi_mbind_local(): mbind(p=0x7f07374fd000, size=1073741824) error=1 "Operation not permitted"
Assertion failed in file ../../src/mpid/ch4/shm/posix/eager/include/intel_transport.c at line 354: absolute_l3_cache_id <= max
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(MPL_backtrace_show+0x1c) [0x7f844d5776dc]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(MPIR_Assert_fail+0x21) [0x7f844ce40171]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x48168d) [0x7f844cdbd68d]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x9da9a2) [0x7f844d3169a2]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x606b3b) [0x7f844cf42b3b]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x6f5a26) [0x7f844d031a26]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x1cb93c) [0x7f844cb0793c]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(PMPI_Init_thread+0xe0) [0x7f844cda6760]
/root/miniconda3/lib/python3.10/site-packages/mpi4py/MPI.cpython-310-x86_64-linux-gnu.so(+0x2fbc3) [0x7f844dd62bc3]
python(PyModule_ExecDef+0x70) [0x5976e0]
python() [0x598a69]
python() [0x4fdc1b]
python(_PyEval_EvalFrameDefault+0x5a6f) [0x4f443f]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x4b4e) [0x4f351e]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x731) [0x4ef101]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x31f) [0x4eecef]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x31f) [0x4eecef]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python() [0x4fddb4]
python(_PyObject_CallMethodIdObjArgs+0x137) [0x50c987]
python(PyImport_ImportModuleLevelObject+0x537) [0x50bcf7]
python() [0x517854]
python() [0x4fe1a7]
python(PyObject_Call+0x209) [0x50a8b9]
python(_PyEval_EvalFrameDefault+0x5a6f) [0x4f443f]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x31f) [0x4eecef]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
Abort(1) on node 0: Internal error
```
The second rank fails with the same assertion:

```
Assertion failed in file ../../src/mpid/ch4/shm/posix/eager/include/intel_transport.c at line 354: absolute_l3_cache_id <= max
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(MPL_backtrace_show+0x1c) [0x7f38d72106dc]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(MPIR_Assert_fail+0x21) [0x7f38d6ad9171]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x48168d) [0x7f38d6a5668d]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x9da902) [0x7f38d6faf902]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x606b3b) [0x7f38d6bdbb3b]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x6f5a26) [0x7f38d6ccaa26]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(+0x1cb93c) [0x7f38d67a093c]
/opt/intel/oneapi/mpi/2021.3.1//lib/release/libmpi.so.12(PMPI_Init_thread+0xe0) [0x7f38d6a3f760]
/root/miniconda3/lib/python3.10/site-packages/mpi4py/MPI.cpython-310-x86_64-linux-gnu.so(+0x2fbc3) [0x7f38d79fbbc3]
python(PyModule_ExecDef+0x70) [0x5976e0]
python() [0x598a69]
python() [0x4fdc1b]
python(_PyEval_EvalFrameDefault+0x5a6f) [0x4f443f]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x4b4e) [0x4f351e]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x731) [0x4ef101]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x31f) [0x4eecef]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x31f) [0x4eecef]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python() [0x4fddb4]
python(_PyObject_CallMethodIdObjArgs+0x137) [0x50c987]
python(PyImport_ImportModuleLevelObject+0x537) [0x50bcf7]
python() [0x517854]
python() [0x4fe1a7]
python(PyObject_Call+0x209) [0x50a8b9]
python(_PyEval_EvalFrameDefault+0x5a6f) [0x4f443f]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
python(_PyEval_EvalFrameDefault+0x31f) [0x4eecef]
python(_PyFunction_Vectorcall+0x6f) [0x4fe5ef]
Abort(1) on node 1: Internal error
```
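The backtrace shows the assertion firing inside PMPI_Init_thread, which mpi4py triggers the moment `mpi4py.MPI` is imported. The original script is not shown in the thread, so the file name and contents below are assumptions; a minimal sketch that reaches the same code path would be:

```shell
# Hypothetical minimal reproducer: MPI_Init_thread runs at import time,
# so simply importing mpi4py.MPI under 2 ranks should hit the assertion
# on the affected machine.
cat > repro.py <<'EOF'
from mpi4py import MPI  # PMPI_Init_thread is invoked during this import
print("rank", MPI.COMM_WORLD.Get_rank(), "of", MPI.COMM_WORLD.Get_size())
EOF
# Works with one rank, asserts with two on the affected machine:
#   mpirun -n 1 python repro.py
#   mpirun -n 2 python repro.py
```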
Hi,
Thank you for posting in the Intel community.
Could you please provide the following details so that we can check whether your setup is of a supported type:
1. OS details
2. CPU details
Could you please also try running with I_MPI_FABRICS=ofi instead of 'shm', since 'shm' is only used for single-node runs?
NOTE: Please also try the latest version of Intel MPI (2021.10) and let us know if the issue persists. If it does, please provide a sample reproducer and the steps to reproduce it at our end.
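The suggested fabric override can be sketched as follows; the launch line is illustrative, since the original command is not shown in the thread:

```shell
# Workaround sketch: select the libfabric (OFI) path instead of the shm
# transport whose L3-cache lookup is asserting.
export I_MPI_FABRICS=ofi
# Optional: print fabric-selection details at startup to confirm it took effect.
export I_MPI_DEBUG=5
# Illustrative launch (actual script name unknown):
#   mpirun -n 2 python your_script.py
```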
Thanks and regards,
Aishwarya
Hi,
We haven't heard back from you. Could you please provide the details requested in the previous response and let us know whether your issue is resolved with the latest version (2021.10)? If it is, please accept the reply as a solution.
Thank you and best regards,
Aishwarya
Sorry for the delay. The issue has been resolved; it was caused by the CPU architecture not being supported by Intel MPI.
Hi,
Glad to know that your issue is resolved. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.
Thanks and regards,
Aishwarya
For anyone encountering this issue with an older version of Intel MPI, the suggestion above of setting I_MPI_FABRICS=ofi worked for me.