
assertion failed intel_transport_init.h

RHRK_TUKL (Beginner)

oneAPI version 2021.3, running under SLURM.

mpiexec.hydra (and likewise mpiexec and mpirun) produces:

Assertion failed in file ../../src/mpid/ch4/shm/posix/eager/include/intel_transport_init.h at line 1057: llc_id >= 0
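For context: the failing check presumably concerns the last-level cache (LLC) id that Intel MPI's shared-memory transport detects for each core during startup. On Linux, cache topology is exposed through sysfs and can be inspected directly; a quick check might look like the sketch below (the exact paths are illustrative, and whether Intel MPI reads precisely these files is an assumption).

# index3 is typically the L3 (last-level) cache; the "id" attribute may be
# absent on older kernels such as the 3.10 kernel mentioned later in the thread.
cat /sys/devices/system/cpu/cpu0/cache/index3/id
# Summarize the cache hierarchy reported by the kernel.
lscpu | grep -i cache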

ShivaniK_Intel (Moderator)

Hi,


Thanks for reaching out to us.


Could you please provide the sample reproducer code and the steps to reproduce the issue at our end?


Also, could you please provide your system environment details (OS version)?


Thanks & Regards

Shivani


RHRK_TUKL (Beginner)

#SBATCH --nodes=2 
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=2
#SBATCH --ntasks-per-node=2

mpiicc -o oneA test_affinity.c

export I_MPI_DEBUG=100

mpiexec.hydra ./oneA

 

OS: Linux 3.10.0-1160.42.2.el7.x86_64
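For reference, assembled into a single batch script the steps above might look like the following sketch; the shebang and the setvars.sh path are assumptions not shown in the original post.

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=2
#SBATCH --ntasks-per-node=2

# Assumed location of the oneAPI environment script; adjust to your install.
source /opt/intel/oneapi/setvars.sh

# Build the reproducer with the Intel MPI C compiler wrapper.
mpiicc -o oneA test_affinity.c

# Maximum MPI startup/debug verbosity, written to the job's output.
export I_MPI_DEBUG=100

mpiexec.hydra ./oneA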

ShivaniK_Intel (Moderator)

Hi,


Thanks for providing the details.


We can see that you have not attached the sample reproducer code. Could you please provide it so that we can investigate the issue further?


Thanks & Regards

Shivani


RHRK_TUKL (Beginner)

OK, it's that simple. In my opinion, execution does not even get past MPI_Init().
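The attached test_affinity.c is not reproduced in the thread; a minimal stand-in that already reaches the failing code path could look like this (hypothetical, since any program calling MPI_Init() suffices to hit the reported assertion):

/* Minimal sketch of a reproducer; the real test_affinity.c attached to the
 * thread is not shown here. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* The assertion in intel_transport_init.h fires during initialization,
     * so on the affected setup execution never returns from this call. */
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}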

ShivaniK_Intel (Moderator)

Hi,


Thanks for providing the requested details. We tried this on CentOS Linux 8 and were unable to reproduce the issue at our end.


Could you please provide a complete error log so that we can investigate your issue further?


Thanks & Regards

Shivani 


RHRK_TUKL (Beginner)

Hello,

here is the SLURM error file with I_MPI_DEBUG=100 after calling

mpiexec.hydra ./exe
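A note for readers: SLURM writes stdout and stderr to a single slurm-%j.out file by default, so a separate error file like the one attached here typically comes from a split such as the following in the job script (an assumption, since the script itself is not shown):

#SBATCH --output=slurm-%j.out   # would receive the I_MPI_DEBUG log (stdout)
#SBATCH --error=slurm-%j.err    # would receive the assertion message (stderr)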
ShivaniK_Intel (Moderator)

Hi,

Thanks for providing the SLURM error file. Could you please also provide the output file, which contains the I_MPI_DEBUG log?

Also, could you let us know which libfabric provider you are using?

Thanks & Regards

Shivani
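For readers with the same question: the selected provider can be checked with standard libfabric and Intel MPI mechanisms, along these lines (the provider name "tcp" below is only an example):

# List the libfabric providers available on the node.
fi_info -l

# The provider Intel MPI actually selected is also printed in the
# I_MPI_DEBUG startup output, in a line of the form
# "MPI startup(): libfabric provider: ...".

# To force a specific provider for a test run:
export FI_PROVIDER=tcp          # libfabric-level selection
export I_MPI_OFI_PROVIDER=tcp   # equivalent Intel MPI variable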
RHRK_TUKL (Beginner)

Hi,

I installed the new release 2021.4.

Now everything seems to work as it should.

The case may be closed.

Regards,

Josef Schüle
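To confirm that the new release is the one actually being picked up by the environment, the standard version queries can be used:

# Both report the Intel MPI library version found in the environment.
mpirun -V
mpiicc -v

# The library version also appears near the top of the I_MPI_DEBUG startup output.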
ShivaniK_Intel (Moderator)

Hi,


Glad to know that your issue is resolved. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.


Thanks & Regards

Shivani

