Intel® MPI Library

Problem calling MPI with GPU arrays (use_device_ptr, use_device_addr)

caplanr
New Contributor I

Hi,

 

I am trying to compile and run our code (github.com/predsci/pot3d) on Stampede3 on the MAX 1550 GPUs.

 

The code makes GPU-aware MPI calls, passing device buffers obtained with OpenMP target's use_device_ptr() clause.

(This works with the NVIDIA compiler and the HPC-X MPI library across NVIDIA GPUs.)

 

On the Intel GPUs, I am getting a seg fault at the MPI calls.

 

The compiler warns that use_device_ptr() can only be used with C pointers (TYPE(C_PTR)) in Fortran, so I switched the clauses to use_device_addr(), but that also segfaults.
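
For reference, the pattern in question looks roughly like this (a minimal sketch, not the actual pot3d code; the array name and the Allreduce are illustrative). I build with something like mpiifx -fiopenmp -fopenmp-targets=spir64:

program gpu_aware_sketch
  use mpi
  use iso_fortran_env, only: real64
  implicit none
  integer :: ierr
  real(real64), allocatable :: a(:)

  call MPI_Init(ierr)
  allocate(a(1024))
  a = 1.0_real64

  ! Keep the array resident on the device.
  !$omp target enter data map(to: a)

  ! use_device_addr exposes the device address of the Fortran array
  ! inside the region, so the MPI call receives a device buffer.
  ! (use_device_ptr is restricted to TYPE(C_PTR) variables in Fortran.)
  !$omp target data use_device_addr(a)
  call MPI_Allreduce(MPI_IN_PLACE, a, size(a), MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)
  !$omp end target data

  !$omp target exit data map(from: a)
  call MPI_Finalize(ierr)
end program gpu_aware_sketch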

 

Are GPU-aware MPI calls supported in ifx+impi (I am using 2025.0.0)?

 

If so, what do I need to do to get it to work?

 

Can you clarify the difference between use_device_ptr() and use_device_addr()?

Based on this thread, the difference does not seem clear:

https://forums.developer.nvidia.com/t/omp-target-data-use-device-ptr-vs-use-device-addr/290317

 

Thanks!

 

 - Ron

TobiasK
Moderator

@caplanr when passing GPU pointers to Intel MPI, you have to set
export I_MPI_OFFLOAD=1
before running the application
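
For example (binary name and rank count are illustrative):

export I_MPI_OFFLOAD=1
mpirun -n 4 ./pot3d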

caplanr
New Contributor I

Thanks!

 

It is now working!

Another tip for someone with the same issue:

I also had to set:

export LIBOMPTARGET_DEVICES=SUBDEVICE

in order for MPI to see all 8 tiles of the 4 MAX 1550 GPUs.

By default, only 1 GPU device was visible (on Stampede3).
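
So the full working setup looks like this (the launch line is illustrative; 8 ranks, one per tile):

export I_MPI_OFFLOAD=1
export LIBOMPTARGET_DEVICES=SUBDEVICE
mpirun -n 8 ./pot3d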

 

 - Ron
