Intel® MPI Library

MPI linking with LLVM-based compilers

FCLC
Beginner

Hi All,

 

I'm currently trying to build the PETSc library with the Intel oneAPI HPC Toolkit, and was hoping to use the more modern LLVM-based compilers (specifically icx, icpx, and ifx).

The "classical" way of using MPI with the intel compilers was the provided wrappers (mpiicc, mpiicpc and mpiifort) but those do not seem to be around for the newer compilers.

After finding this post (https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Intel-MPI-for-icx/m-p/1439211#M10147) and following the relevant documentation (https://www.intel.com/content/www/us/en/docs/mpi-library/developer-reference-linux/2021-8/compilation-command-options.html#compilation-command-options_GUID-7D5FF697-C49D-4D40-92C0-8FEBE4C1BFF5), the command failed during the build.

Perhaps it's a bug in the MPI wrapper or something else altogether, but the above doesn't seem to work with the 2023.1 versions of the compilers.

Specifically, I tried with:

`icx --version`: Intel(R) oneAPI DPC++/C++ Compiler 2023.1.0 (2023.1.0.20230320)
`mpiicc --version`: icc (ICC) 2021.9.0 20230302
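A quick sanity check, independent of PETSc, is to compare the backend the wrapper reports with and without the override (assuming the wrapper forwards --version to the underlying compiler, as the output above suggests):

mpiicc --version            # reports the default backend (icc here)
mpiicc -cc=icx --version    # should report the oneAPI DPC++/C++ compiler if the override takes effect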


I'm documenting the overall project in this thread: https://twitter.com/FelixCLC_/status/1668344146401300481

Cheers and thanks,

 

Felix

6 Replies
RabiyaSK_Intel
Employee

Hi,


Thanks for posting in Intel Communities.


Could you please post the complete information of your query in the forum, as it will be useful for other users as well?


Could you also please provide the following details:

1. The Operating System & Processor details

2. Detailed steps to reproduce your issue


>>>The command failed during build.

Could you please elaborate on where you are facing the issue?


Thanks & Regards,

Shaik Rabiya


FCLC
Beginner

Hi Shaik, apologies for the delay.

The platform is Linux, specifically Pop!_OS 22.04 LTS, a downstream version of Ubuntu 22.04 LTS that follows a rolling release cadence.

`uname -a`: `Linux pop-os 6.3.0-local-fclc #3 SMP PREEMPT_DYNAMIC Mon May 15 23:00:23 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux`

The kernel is upstream version 6.3.0.

The processor is an Alder Lake i7-12700K with only the P-cores in use, serving as a development platform for the Golden Cove microarchitecture found in 4th-generation Xeon Sapphire Rapids and the latest W790-based Sapphire Rapids workstation/HEDT processors.

The query/issue in question is linking to the MPI libraries with the latest LLVM-based Intel compilers: icx, icpx, and ifx for C, C++, and Fortran respectively.

 

The usage/application is with the PETSc HPC library, developed by the Argonne National Lab Leadership Computing Facility, the University of Chicago, and other FOSS/open-source contributors.

In this case I've had to build with the following flags, because the workarounds listed in https://www.intel.com/content/www/us/en/docs/mpi-library/developer-reference-linux/2021-8/compilation-command-options.html#GUID-7D5FF697-C49D-4D40-92C0-8FEBE4C1BFF5 were not functional: `./configure --download-viennacl=1 --with-debugging=0 COPTFLAGS='-O3 -march=sapphirerapids -mno-amx-tile -mno-amx-int8 -mno-amx-bf16' CXXOPTFLAGS='-O3 -march=sapphirerapids -mno-amx-tile -mno-amx-int8 -mno-amx-bf16' FOPTFLAGS='-O3 -march=sapphirerapids -mno-amx-tile -mno-amx-int8 -mno-amx-bf16' --with-blaslapack-dir=$MKLROOT --with-mpi-dir=/opt/intel/oneapi/mpi/latest/ --prefix=/home/felix/petsc4foam/petsc_build/ --with-precision=single`

Replication should be fairly trivial, but for the avoidance of doubt I've recorded a video using the following simplified command as an example: `./configure --with-debugging=0 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort COPTFLAGS='-diag-disable=10441 -O3 -march=native -cc=icx' CXXOPTFLAGS='-O3 -march=native -cxx=icpx' FOPTFLAGS='-O3 -march=native -fc=ifx' --prefix=/home/felix/petsc4foam/petsc_build/ --with-precision=single`

I'll note that the issue is specifically with the C and C++ compilers erroring out during configuration; on the Fortran side, while the `mpiifort -fc=ifx` option is accepted, the compiler is still detected as ifort.
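One way to see which backend a wrapper will actually invoke, without running a full configure, is the wrappers' -show option, which prints the underlying compile/link command line instead of executing it (a sketch; the exact output will vary):

mpiicc -cc=icx -show      # should print an icx command line if the override is honored
mpiifort -fc=ifx -show    # shows whether ifx or ifort would be invoked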

Video available here: https://www.youtube.com/watch?v=2oj7lC9cemw


The "ideal" build would be using SYCL, but those are not yet considered production ready for PETSC.

In the meantime, the closest equivalent is approximately the `--download-viennacl=1` configure command shown above.

RabiyaSK_Intel
Employee

Hi,

 

We haven't heard back from you. Could you please provide the requested details so we can reproduce your issue on our end?

 

Thanks & Regards,

Shaik Rabiya

 

RabiyaSK_Intel
Employee
(Accepted solution)

Hi,

 

Thank you for your patience. Could you please try building PETSc with this command:

 

./configure --with-debugging=0 --with-cc='mpiicc -cc=icx' --with-cxx='mpiicpc -cxx=icpx' --with-fc='mpiifort -fc=ifx' COPTFLAGS='-diag-disable=10441 -O3 -march=native' CXXOPTFLAGS='-O3 -march=native' FOPTFLAGS='-O3 -march=native' --prefix=/home/felix/petsc4foam/petsc_build/ --with-precision=single

Note that the -cc/-cxx/-fc options are quoted together with the wrappers in --with-cc/--with-cxx/--with-fc, rather than placed in the optimization-flag variables as in the earlier attempt.

 

 

When using the environment variables, it is best to export them; otherwise they are set only for the single command they prefix and have to be specified again for every other command.

For example, we could build/make with:

I_MPI_CC=icx I_MPI_CXX=icpx I_MPI_F90=ifx make 
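A minimal sketch of the exported form, so that the overrides persist for configure, make, and any other commands in the same shell session:

export I_MPI_CC=icx
export I_MPI_CXX=icpx
export I_MPI_F90=ifx
make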

 

We were able to build the PETSc library successfully with the Intel LLVM-based compilers (icx, icpx, ifx) using the given commands. Please refer to the log file for more information.

 

Thanks & Regards,

Shaik Rabiya

 

FCLC
Beginner

Yup, the above solved the issue!

Thanks for the follow up.

 

-Felix

RabiyaSK_Intel
Employee

Hi,


Thanks for accepting our solution. If you need any additional information, you can post a new question on the community, as this thread will no longer be monitored by Intel.


Thanks & Regards,

Shaik Rabiya

