Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

MPI_f08 with polymorphic argument CLASS(*)

hakostra1
New Contributor II

Over in the Fortran forum I reported a bug last summer. It was originally about calling some MPI_f08 functions, for which ifx gave compilation errors. @Steve_Lionel helped me boil it down to a code snippet that did not depend on MPI, and the bug was fixed in version 2024.1 of the Intel Fortran compiler.

However, my original MPI code still does not compile. I have now boiled it down to the following example:

 

SUBROUTINE test(baz, dtype)
    USE MPI_f08
    IMPLICIT NONE (type, external)

    ! Subroutine arguments
    CLASS(*) :: baz
    TYPE(MPI_Datatype) :: dtype

    ! Local variables
    TYPE(MPI_Request) :: recvreq

    CALL MPI_Irecv(baz, 1, dtype, 0, 0, MPI_COMM_SELF, recvreq)
END SUBROUTINE test

 

Compiling this with mpiifx gives:

 

example-2.F90(12): error #8769: If the actual argument is unlimited polymorphic, the corresponding dummy argument must also be unlimited polymorphic.   [BAZ]
    CALL MPI_Irecv(baz, 1, dtype, 0, 0, MPI_COMM_SELF, recvreq)
-------------------^
compilation aborted for example-2.F90 (code 1)

 

The versions of the compiler and MPI library are:

 

$ mpiifx --version
ifx (IFX) 2024.1.0 20240308
Copyright (C) 1985-2024 Intel Corporation. All rights reserved.

$ mpirun --version
Intel(R) MPI Library for Linux* OS, Version 2021.12 Build 20240213 (id: 4f55822)
Copyright 2003-2024, Intel Corporation.

 

Since the compiler bug is apparently fixed in the compiler version I am using now, I wonder if there could be a bug somewhere in the MPI_f08 bindings instead? Please refer to the original post in the Fortran part of the forum for more information on the original problem.

Both GFortran + OpenMPI and the NAG Fortran compiler + MPICH compile the example above just fine, without any errors or warnings.
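
For context, the MPI standard specifies the mpi_f08 binding of MPI_Irecv with an assumed-type, assumed-rank choice buffer, which as far as I understand is exactly what should allow a CLASS(*) actual argument here. A rough sketch of such an interface is below; it is paraphrased from the standard, the module and subroutine names are made up for illustration, and it is not the actual Intel module source:

MODULE irecv_interface_sketch        ! illustration only, not a real module
    USE MPI_f08, ONLY: MPI_Datatype, MPI_Comm, MPI_Request
    IMPLICIT NONE

    INTERFACE
        ! Paraphrased from the MPI standard's mpi_f08 specification of MPI_Irecv
        SUBROUTINE MPI_Irecv_sketch(buf, count, datatype, source, tag, comm, request, ierror)
            IMPORT :: MPI_Datatype, MPI_Comm, MPI_Request
            ! Assumed-type, assumed-rank buffer: accepts an actual argument of any
            ! type and rank, which should include an unlimited polymorphic CLASS(*)
            TYPE(*), DIMENSION(..), ASYNCHRONOUS :: buf
            INTEGER, INTENT(IN) :: count, source, tag
            TYPE(MPI_Datatype), INTENT(IN) :: datatype
            TYPE(MPI_Comm), INTENT(IN) :: comm
            TYPE(MPI_Request), INTENT(OUT) :: request
            INTEGER, OPTIONAL, INTENT(OUT) :: ierror
        END SUBROUTINE MPI_Irecv_sketch
    END INTERFACE
END MODULE irecv_interface_sketch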

Thanks in advance for all help.

10 Replies
TobiasK
Moderator

@hakostra1

We made some changes in the F08 bindings recently; let me check this.
TobiasK
Moderator

@hakostra1 
Thanks for reporting this. Are you using Windows or Linux?

hakostra1
New Contributor II

Thanks for looking into this. I'm using Linux.

TobiasK
Moderator

@hakostra1 

For Linux, please try the following (after setting up the oneAPI 2024.1 environment):

cd $I_MPI_ROOT/opt/mpi/binding
tar -xf intel-mpi-binding-kit.tar.gz
cd f08
make MPI_INST=${I_MPI_ROOT} F90=ifx NAME=ifx
cd include/ifx
mkdir ${I_MPI_ROOT}/include/mpi/back
cp ${I_MPI_ROOT}/include/mpi/* ${I_MPI_ROOT}/include/mpi/back
cp * ${I_MPI_ROOT}/include/mpi/

 

and try your example again.

 

Best

Tobias

hakostra1
New Contributor II

Yes, the example compiles now. Thanks. I guess this means there is a problem with the included MPI_f08 module files, then?

Are there any plans to update/fix these in a future release? This is quite a tedious fix to apply on every computer and workstation where I want to compile my software, and in many cases I do not even have the necessary privileges to apply it...

Anyways, thanks for the help so far!

TobiasK
Moderator

@hakostra1 

We will fix it with the next release.
In case you cannot modify the installation folder, you can of course just recompile the module files in a folder of your choice and add that folder to your compile / link line.
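
For example, something along the following lines should work; the ~/impi-f08 directory and the setvars.sh path are just placeholders for whatever fits your setup:

# Rebuild the f08 module files in a user-writable folder (no root access needed)
source /opt/intel/oneapi/setvars.sh        # set up the oneAPI 2024.1+ environment
mkdir -p ~/impi-f08 && cd ~/impi-f08
tar -xf ${I_MPI_ROOT}/opt/mpi/binding/intel-mpi-binding-kit.tar.gz
cd f08
make MPI_INST=${I_MPI_ROOT} F90=ifx NAME=ifx

# Point the compiler at the rebuilt .mod files instead of the installed ones
mpiifx -I"$(pwd)/include/ifx" example-2.F90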

hakostra1
New Contributor II

Great to hear it will be fixed! That makes life so much easier for everyone!

 

Thanks a lot for looking into this!

hakostra1
New Contributor II

For reference: I just tried the 2024.2 oneAPI packages that recently came out, and this problem does not appear to be solved in this release:

$ mpiifx example-2.F90 
example-2.F90(13): error #8769: If the actual argument is unlimited polymorphic, the corresponding dummy argument must also be unlimited polymorphic.   [BAZ]
    CALL MPI_Irecv(baz, 1, dtype, 0, 0, MPI_COMM_SELF, recvreq, ierr)
-------------------^
compilation aborted for example-2.F90 (code 1)

$ mpiifx --version
ifx (IFX) 2024.2.0 20240602
Copyright (C) 1985-2024 Intel Corporation. All rights reserved.

$ mpirun --version
Intel(R) MPI Library for Linux* OS, Version 2021.13 Build 20240515 (id: df72937)
Copyright 2003-2024, Intel Corporation.

 

TobiasK
Moderator

Unfortunately, it was decided that backwards compatibility is more important than the fix for this problem, so the solution is still to rebuild the bindings with 2024.1 or newer to get rid of the problem.

hakostra1
New Contributor II
346 Views

OK, thanks for the info. Is there any timeline at all (e.g. 2025.x?) for fixing this? IMHO this is quite a serious bug that severely limits the usability of the Intel MPI Library for modern Fortran applications.
