Intel® MPI Library

A question about Intel mpirun on multi-GPU platforms

Jervie
Novice

Hello,

 

I'm trying to run Neko compiled with Intel oneAPI MPI 2021.7.0 on multiple NVIDIA A100 GPUs. Since Neko assumes one device per MPI rank, and the device has to be assigned from the environment, I wonder whether there are any environment variables that can do that.

 

Actually, I tried I_MPI_PIN_PROCESSOR_LIST and I_MPI_GPU_MAPPING as shown below, but it didn't work.

# make both GPUs visible to all ranks
export CUDA_VISIBLE_DEVICES=0,1
# attempted rank pinning / GPU mapping
export I_MPI_PIN_PROCESSOR_LIST=0:0,1:1
#export I_MPI_GPU_MAPPING=1:0,2:1
mpirun -np 2 ./neko tgv.case
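
For reference, here is a minimal sketch of the kind of per-rank wrapper I have in mind, assuming Intel MPI's Hydra launcher exports MPI_LOCALRANKID (the node-local rank) to each process; the script name select_gpu.sh is only illustrative. Would something like this be the recommended approach?

#!/bin/bash
# select_gpu.sh - sketch: give each node-local MPI rank its own GPU
# (assumes MPI_LOCALRANKID is exported to each rank by the launcher)
ngpus=$(nvidia-smi --list-gpus | wc -l)        # number of GPUs on this node
local_rank=${MPI_LOCALRANKID:-0}               # node-local rank, default 0
export CUDA_VISIBLE_DEVICES=$((local_rank % ngpus))
exec "$@"                                      # run the real command

It would then be launched as

mpirun -np 2 ./select_gpu.sh ./neko tgv.case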

Jervie
Novice

Hi, I'd like to add some more information to my question.

Neko is accelerated with CUDA. I have just learned that Intel MPI GPU pinning is not yet supported for the CUDA backend, as described on this page: GPU Pinning (intel.com)

Therefore, in my case, I cannot run multiple processes on multiple GPUs with one device per MPI rank using Intel MPI, right?
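
To double-check what each rank actually sees, I was planning to print the node-local rank and CUDA_VISIBLE_DEVICES per process before involving Neko at all (again assuming MPI_LOCALRANKID is set by the launcher, and reusing the illustrative select_gpu.sh wrapper from above):

mpirun -np 2 ./select_gpu.sh sh -c 'echo "local rank $MPI_LOCALRANKID -> CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"'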
