Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Bug in Intel MPI on AMD processors

MichalKrupicka
Beginner

The problem occurs in PAM-CRASH and STAR-CCM+ with Intel MPI (2018, 2019, and 2021) on AMD processors (AMD EPYC 7763 64-Core, 2 sockets per node, 128 cores per node) running SLES 15 SP4.

On SLES 15 SP3 there was a workaround: an LD_PRELOADed library that overrides strtok. Since the update to SP4, that workaround no longer works.
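
For illustration, here is a minimal sketch of such an LD_PRELOAD shim. The original workaround library is not shown in this thread, so the file names and the strtok_r-based implementation below are assumptions, not the actual code:

# Build a small shared library that replaces glibc's strtok with a
# thread-safe variant (parser state kept in thread-local storage).
cat > strtok_shim.c <<'EOF'
#define _GNU_SOURCE
#include <string.h>

char *strtok(char *str, const char *delim)
{
    static __thread char *saveptr;  /* per-thread parser state */
    return strtok_r(str, delim, &saveptr);
}
EOF
gcc -shared -fPIC -o libstrtok_shim.so strtok_shim.c

# Preload the shim into the MPI job (exported environment variables
# are forwarded to the ranks by the launcher):
export LD_PRELOAD=$PWD/libstrtok_shim.so
mpirun -n 256 ./app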

The computation hangs every time the number of MPI ranks exceeds 256.

7 Replies
ShivaniK_Intel
Moderator

Hi,


Thanks for posting in the Intel forums.


Could you please let us know whether you are facing a similar issue with the Intel processor?


We can only offer direct support for Intel hardware platforms that the Intel® oneAPI product supports. Intel provides instructions on how to compile oneAPI code for both CPUs and a wide range of GPU accelerators.


https://intel.github.io/llvm-docs/GetStartedGuide.html


Thanks & Regards

Shivani


MichalKrupicka
Beginner

Hi Shivani,

 

Thanks for the reply. We do not face this issue on Intel processors.

Does that mean Intel MPI is not prepared for processors other than Intel's? Is Intel MPI supposed to run only on Intel processors?

 

Thanks & regards,

Michal

ShivaniK_Intel
Moderator

Hi,


Could you please provide us with the sample reproducer code and steps to reproduce the issue at our end?


Thanks & Regards

Shivani


ShivaniK_Intel
Moderator

Hi,


Could you please provide us with the details of the interconnect used? Also, please let us know which Intel MPI variables you set to change the fabric.
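
For reference, fabric selection is usually controlled with environment variables like the following; the values shown are illustrative only, not a recommendation:

# Shared memory within a node, libfabric (OFI) between nodes:
export I_MPI_FABRICS=shm:ofi
# Pin the libfabric provider explicitly (e.g. tcp, verbs, mlx, psm3):
export FI_PROVIDER=tcp
# Print the selected provider and process pinning at startup:
export I_MPI_DEBUG=5
mpirun -n 256 ./app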


Thanks & Regards

Shivani



ShivaniK_Intel
Moderator

Hi,


As we did not hear back from you, could you please respond to my previous post?


Could you please let us know if this issue also happens with the IMB-MPI1 benchmarks included in the Intel MPI distribution?

$ mpirun -n 256 IMB-MPI1 sendrecv


Could you also please let us know whether you are using 256+ ranks on a single node with 128 cores or if this happens when using more than a single node?
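
For example, the two cases can be reproduced as follows (hostnames below are placeholders):

# Case 1: all 256 ranks on one 128-core node (2x oversubscribed):
mpirun -n 256 -ppn 256 IMB-MPI1 sendrecv

# Case 2: 256 ranks spread across two nodes, 128 ranks per node:
mpirun -n 256 -ppn 128 -hosts node1,node2 IMB-MPI1 sendrecv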


Thanks & Regards

Shivani



ShivaniK_Intel
Moderator

Hi,


As we did not hear back from you, could you please respond to my previous post?


Thanks & Regards

Shivani


ShivaniK_Intel
Moderator

Hi,


We have not heard back from you. This thread will no longer be monitored by Intel. If you need further assistance, please post a new question.


Thanks & Regards

Shivani

