Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

assertion failed intel_transport_init.h

RHRK_TUKL
Beginner
3,940 Views

oneAPI version 2021.3

SLURM

mpiexec.hydra (and likewise mpiexec and mpirun) produces

Assertion failed in file ../../src/mpid/ch4/shm/posix/eager/include/intel_transport_init.h at line 1057: llc_id >= 0

9 Replies
ShivaniK_Intel
Moderator
3,912 Views

Hi,


Thanks for reaching out to us.


Could you please provide the sample reproducer code and the steps to reproduce the issue at our end?


Also, could you please provide your system environment details (OS version)?


Thanks & Regards

Shivani


RHRK_TUKL
Beginner
3,906 Views

#SBATCH --nodes=2 
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=2
#SBATCH --ntasks-per-node=2

mpiicc -o oneA test_affinity.c

export I_MPI_DEBUG=100

mpiexec.hydra ./oneA

 

OS: Linux 3.10.0-1160.42.2.el7.x86_64
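
A minimal affinity test of this kind (an illustrative sketch only; the actual test_affinity.c may differ) could be:

/* Illustrative stand-in for test_affinity.c: each rank prints its host and
 * CPU affinity mask. _GNU_SOURCE is needed for sched_getaffinity. */
#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* per this thread, the reported assertion already occurs here */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char host[256];
    gethostname(host, sizeof(host));

    cpu_set_t mask;
    CPU_ZERO(&mask);
    sched_getaffinity(0, sizeof(mask), &mask);

    /* List the CPUs this rank is allowed to run on. */
    printf("rank %d of %d on %s, CPUs:", rank, size, host);
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &mask))
            printf(" %d", cpu);
    printf("\n");

    MPI_Finalize();
    return 0;
}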

ShivaniK_Intel
Moderator
3,890 Views

Hi,


Thanks for providing the details.


We can see that you have not attached the sample reproducer code. Could you please provide it so that we can investigate the issue further?


Thanks & Regards

Shivani


RHRK_TUKL
Beginner
3,877 Views

OK, it's that simple. In my opinion, the run does not even get past MPI_Init().
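
Even a bare-bones program like the following (an illustrative sketch, not the attached code) should already hit the assertion if MPI_Init() is the failing call:

/* Bare-minimum check: if the assertion really fires inside MPI_Init,
 * this never reaches the printf. (Illustrative only.) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    printf("MPI_Init passed\n");
    MPI_Finalize();
    return 0;
}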

ShivaniK_Intel
Moderator
3,807 Views

Hi,


Thanks for providing the requested details. We tried this on CentOS Linux 8 and were unable to reproduce the issue on our end.


Could you please provide the complete error log so that we can investigate your issue further?


Thanks & Regards

Shivani 


RHRK_TUKL
Beginner
3,687 Views

Hi,

I installed the new release, 2021.4.

Now everything seems to work as it should.

The case may be closed.

Regards,

 Josef Schüle

RHRK_TUKL
Beginner
3,790 Views

Hello,

here is the SLURM error file generated with I_MPI_DEBUG=100 after calling

mpiexec.hydra ./exe

ShivaniK_Intel
Moderator
3,742 Views

Hi,


Thanks for providing the SLURM error file. Could you also please provide the output file that contains the I_MPI_DEBUG log?


Also, could you let us know which libfabric provider you have been using?
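
(The provider actually selected usually also shows up in the I_MPI_DEBUG output. As a sketch only, a small program such as the one below can report the MPI library version string and the common provider-selection environment variables FI_PROVIDER and I_MPI_OFI_PROVIDER, if they are set:)

/* Sketch: report the MPI library version and, if set, the environment
 * variables commonly used to select the libfabric provider. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    char ver[MPI_MAX_LIBRARY_VERSION_STRING];
    int len = 0;
    MPI_Get_library_version(ver, &len);

    const char *fi  = getenv("FI_PROVIDER");        /* libfabric provider selection */
    const char *ofi = getenv("I_MPI_OFI_PROVIDER"); /* Intel MPI provider selection */

    printf("%s\nFI_PROVIDER=%s\nI_MPI_OFI_PROVIDER=%s\n",
           ver, fi ? fi : "(unset)", ofi ? ofi : "(unset)");

    MPI_Finalize();
    return 0;
}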


Thanks & Regards

Shivani



ShivaniK_Intel
Moderator
3,625 Views

Hi,


Glad to know that your issue is resolved. If you need any additional information, please post a new question as this thread will no longer be monitored by Intel.


Thanks & Regards

Shivani

