Intel® MPI Library

Trace Collector + Fortran 2008

Nick__Fabian
Beginner

Hi,

I have observed that when tracing the following program with mpiexec -trace, everything works fine
as long as I stick with "use mpi". If I change that to "use mpi_f08", I do not get a trace file.
The reason I'm interested in using mpi_f08 is that I have an application to trace that uses
the shared-memory MPI model, and it seems that the call to

MPI_Comm_split_type

that is used below is only possible with the mpi_f08 module, right?

Any hints on why I cannot trace that program when using "use mpi_f08"?

Some extra info:

$ mpiifort -o shm shm.f90
$ mpiifort --version
ifort (IFORT) 18.0.2 20180210

$ mpiexec -trace -np 4 shm

program nicks_program

   ! use mpi_f08   ! swapping this in reproduces the missing trace file
   use mpi

   implicit none

   integer :: wrank, wsize, sm_rank, sm_size, ierr, send
   type(MPI_COMM) :: MPI_COMM_SHARED   ! derived-type handle; defined by the mpi_f08 module

   call MPI_Init(ierr)
   call MPI_comm_rank(MPI_COMM_WORLD, wrank, ierr)
   call MPI_comm_size(MPI_COMM_WORLD, wsize, ierr)

   ! call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, MPI_COMM_SHARED, ierr)
   send = wrank


   call MPI_Bcast( send, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr )
   ! call MPI_Bcast( send, 1, MPI_INTEGER, 0, MPI_COMM_SHARED, ierr )

   write(*,*) 'send = ', send
   write(*,*) 'ierr = ', ierr

   call MPI_Finalize(ierr)
end

Nick__Fabian
Beginner

Solved by using "use pmpi_f08" instead of "use mpi_f08".
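Applied to the program above, only the use line changes. Below is a minimal sketch of the fixed version based on that reply (not re-verified here); as an assumption beyond the original post, the shared-memory calls that were commented out are enabled and the otherwise unused sm_rank/sm_size variables are queried on the node-local communicator:

program nicks_program

   use pmpi_f08   ! instead of: use mpi_f08, as per the fix above

   implicit none

   integer :: wrank, wsize, sm_rank, sm_size, ierr, send
   type(MPI_COMM) :: MPI_COMM_SHARED

   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, wrank, ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, wsize, ierr)

   ! split MPI_COMM_WORLD into per-node shared-memory communicators
   call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &
                            MPI_COMM_SHARED, ierr)
   call MPI_Comm_rank(MPI_COMM_SHARED, sm_rank, ierr)
   call MPI_Comm_size(MPI_COMM_SHARED, sm_size, ierr)

   ! broadcast within the shared-memory communicator
   send = wrank
   call MPI_Bcast(send, 1, MPI_INTEGER, 0, MPI_COMM_SHARED, ierr)

   write(*,*) 'wrank = ', wrank, ' sm_rank = ', sm_rank, '/', sm_size, ' send = ', send

   call MPI_Finalize(ierr)
end program nicks_program

Run it the same way as before (mpiexec -trace -np 4 shm); per the reply above, the trace file is then produced.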
