Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

SENDRECV + MPI_TYPE_CREATE_STRUCT

diedro
Beginner

Dear all,

I have some basic questions about MPI_SENDRECV and MPI_TYPE_CREATE_STRUCT.

First, MPI_TYPE_CREATE_STRUCT. As suggested by James Tullos, when I have a data type such as:

type particle
 integer                 :: rx
 integer                 :: ry
 real                    :: QQ(4)
end type particle

I can create a MPI data type as follows:

type(particle) dummy ! used only to compute sizes/displacements
integer lengths(2), types(2), ierr
integer(kind=MPI_ADDRESS_KIND) displacements(2)
integer mpi_particle_type
types(1)=MPI_INTEGER
types(2)=MPI_REAL
lengths(1)=2 ! two integers: rx and ry
lengths(2)=4 ! four reals: QQ(1:4)
displacements(1)=0
displacements(2)=sizeof(dummy%rx)+sizeof(dummy%ry)
call MPI_TYPE_CREATE_STRUCT(2,lengths,displacements,types,mpi_particle_type,ierr)
call MPI_TYPE_COMMIT(mpi_particle_type,ierr)

The question is: why do I declare MPI_PARTICLE_TYPE as an INTEGER and not as a REAL?

The second question is: how can I send, for example, 100 MPI_PARTICLE_TYPE variables to another processor with SENDRECV? Do I have to create a vector of MPI_PARTICLE_TYPE:

MPI_PARTICLE_TYPE :: VECTOR(100)

Am I right?

Many thanks to everyone.

Diego

James_T_Intel
Moderator

For the first question, I answered in the other thread; I'll copy it here:

In Fortran, there are two ways to define MPI objects. You can use integer, which creates a handle to the actual datatype. Or, as of MPI-3 (the mpi_f08 module), you could instead use TYPE(MPI_Datatype). I use integer out of familiarity.
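For example, a minimal sketch of the two declaration styles (use one module or the other, not both in the same program unit):

[plain]! Fortran 90 bindings: the handle is a plain integer
use mpi
integer :: mpi_particle_type

! MPI-3 Fortran 2008 bindings: the handle is a derived type
use mpi_f08
type(MPI_Datatype) :: mpi_particle_type[/plain]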

For the second: to send an array of 100, you don't define an array of the datatype. The datatype is just a description of how the data is structured, not the actual data. Instead, define an array of 100 particles and use a count of 100 in your MPI_Sendrecv call:

[plain]type(particle) :: p_send(100),p_recv(100)

...

call MPI_Sendrecv(p_send,100,MPI_PARTICLE_TYPE,destination,tag,p_recv,100,MPI_PARTICLE_TYPE,sender,tag,MPI_COMM_WORLD,MPI_STATUS_IGNORE,ierr)[/plain]
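Here destination and sender are the ranks of the partner processes. Keep in mind that MPI_Sendrecv blocks until both the send and the receive complete, so every rank that calls it needs a matching partner.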

diedro
Beginner

Dear J.,

I created this little test:

TYPES(1)=MPI_INTEGER     ! the new type has three blocks:
TYPES(2)=MPI_REAL2       ! one of integers and two of reals
TYPES(3)=MPI_REAL2
nBLOCKS(1)=1             ! number of elements in each block
nBLOCKS(2)=2
nBLOCKS(3)=4

DISPLACEMENTS(1)=0
DISPLACEMENTS(2)=sizeof(dummy%ip)
DISPLACEMENTS(3)=sizeof(dummy%RP)

CALL MPI_TYPE_CREATE_STRUCT(1,nBLOCKS,DISPLACEMENTS,TYPES,MPI_PARTICLE_TYPE,MPI%ierr)
CALL MPI_TYPE_COMMIT(MPI_PARTICLE_TYPE,MPI%ierr)

IF(MPI%myrank==1)THEN
   DO ip=1,100
      p_send(ip)%ip=ip
      p_send(ip)%RP(:)=10.
      p_send(ip)%QQ(:)=2.
   ENDDO
ENDIF

CALL MPI_Sendrecv(p_send,BUFF,MPI_PARTICLE_TYPE,2,12,p_recv,BUFF,MPI_PARTICLE_TYPE,1,12,MPI_COMM_WORLD,MPI_STATUS_IGNORE,MPI%ierr)

IF(MPI%myrank==2)THEN
   WRITE(*,*) p_recv(2)%ip,p_recv(2)%RP(:)
ENDIF

The problem is that the program never stops.

If I use the MPI_SEND and MPI_RECV subroutines instead, it works:

IF(MPI%myrank==1)THEN
   CALL MPI_SEND(P_SEND, 100, MPI_PARTICLE_TYPE, 2, 10, MPI_COMM_WORLD, MPI%ierr)
ENDIF

IF(MPI%myrank==2)THEN
   CALL MPI_RECV(P_RECV, 100, MPI_PARTICLE_TYPE, 1, 10, MPI_COMM_WORLD, status, MPI%ierr)
   WRITE(*,*) P_RECV(2)%ip,P_RECV(2)%RP(:)
ENDIF

Can you tell me why, please?

I think I am missing something about MPI. I also added a barrier, but nothing changes.
 
Again, thanks a lot
James_T_Intel
Moderator

What value is in BUFF? And are you running with at least 3 ranks? (MPI ranks start counting at 0.)

Also, it seems like your displacements are off.  The sizeof intrinsic gives you the size of a variable, not the address.  If you're using the particle type shown in your first post, you want:

[plain]displacements(1)=0

displacements(2)=sizeof(dummy%rx)

displacements(3)=sizeof(dummy%rx)+sizeof(dummy%ry)[/plain]
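For completeness, the full construction for that type would look like this (a sketch, assuming default 4-byte INTEGER and REAL; note that the first argument of MPI_TYPE_CREATE_STRUCT must equal the number of blocks, which is 3 here, while your test passed 1):

[plain]types(1)=MPI_INTEGER   ! rx
types(2)=MPI_INTEGER   ! ry
types(3)=MPI_REAL      ! QQ(1:4)
lengths(1)=1
lengths(2)=1
lengths(3)=4
displacements(1)=0
displacements(2)=sizeof(dummy%rx)
displacements(3)=sizeof(dummy%rx)+sizeof(dummy%ry)
call MPI_TYPE_CREATE_STRUCT(3,lengths,displacements,types,mpi_particle_type,ierr)
call MPI_TYPE_COMMIT(mpi_particle_type,ierr)[/plain]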

diedro
Beginner

Dear James,

Sorry for my errors; I should have been more accurate.

This is my new variable:

  TYPE tParticle
    INTEGER :: ip
    REAL    :: RP(2)
    REAL    :: QQ(4)
  END TYPE tParticle

so I have:

TYPES(1)=MPI_INTEGER     ! the new type has three blocks:
TYPES(2)=MPI_REAL2       ! one of integers and two of reals
TYPES(3)=MPI_REAL2
nBLOCKS(1)=1             ! number of elements in each block
nBLOCKS(2)=2
nBLOCKS(3)=4

DISPLACEMENTS(1)=0
DISPLACEMENTS(2)=sizeof(dummy%ip)
DISPLACEMENTS(3)=sizeof(dummy%RP)

CALL MPI_TYPE_CREATE_STRUCT(1,nBLOCKS,DISPLACEMENTS,TYPES,MPI_PARTICLE_TYPE,MPI%ierr)
CALL MPI_TYPE_COMMIT(MPI_PARTICLE_TYPE,MPI%ierr)

Have I understood correctly, at least this part?

Then I have:

IF(MPI%myrank==1)THEN
   DO ip=1,100
      p_send(ip)%ip=ip
      p_send(ip)%RP(:)=10.
      p_send(ip)%QQ(:)=2.
   ENDDO
ENDIF

CALL MPI_Sendrecv(p_send,100,MPI_PARTICLE_TYPE,2,1,p_recv,100,MPI_PARTICLE_TYPE,1,1,MPI_COMM_WORLD,MPI_STATUS_IGNORE,ierr)

WRITE(*,*) MPI%myrank,p_recv(2)%ip,p_recv(2)%RP(:)

This is a piece of my code. It runs but never stops.

 

diedro
Beginner

Dear all,

I have replaced BUFF with 100.

 

diedro
Beginner

Dear all,

It seems that all processors have to call MPI_Sendrecv at the same time. In my code I meant it only for processors 1 and 2, but every rank executes the call, even 0 and 3, which of course have no matching partner, and this creates a deadlock.
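For reference, one way to keep a single MPI_Sendrecv line that every rank executes is to give the non-participating ranks MPI_PROC_NULL as partner; a send to or receive from MPI_PROC_NULL returns immediately without doing anything. A minimal sketch (dest and src are two extra integer variables; the rank numbers are the ones from my test):

dest = MPI_PROC_NULL              ! ranks that do not take part talk to nobody
src  = MPI_PROC_NULL
IF(MPI%myrank==1) dest = 2        ! rank 1 sends to rank 2
IF(MPI%myrank==2) src  = 1        ! rank 2 receives from rank 1
CALL MPI_Sendrecv(p_send,100,MPI_PARTICLE_TYPE,dest,12, &
                  p_recv,100,MPI_PARTICLE_TYPE,src,12, &
                  MPI_COMM_WORLD,MPI_STATUS_IGNORE,MPI%ierr)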

Now I have another problem, but I think it is better to create another post.

 

Thanks

diedro
Beginner

Dear all,

the problem is linked to my derived type.

I get some errors when I try to send it. The most important thing is that I work in double precision, so I compile with -r8.

This is my type:

  TYPE tParticle
    INTEGER :: ip
    REAL    :: RP(2)
    REAL    :: QQ(4)
  END TYPE tParticle

so I create my type with:

!
! We create the MPI datatype used to send guest particles to the other processor
!
TYPES(1)=MPI_INTEGER                ! one block of integers (ip)
TYPES(2)=MPI_DOUBLE_PRECISION       ! two blocks of double-precision reals (RP and QQ)
TYPES(3)=MPI_DOUBLE_PRECISION
nBLOCKS(1)=1                        ! number of elements in each block
nBLOCKS(2)=2
nBLOCKS(3)=4
!
DISPLACEMENTS(1)=0
DISPLACEMENTS(2)=sizeof(dummy%ip)
DISPLACEMENTS(3)=sizeof(dummy%ip)+sizeof(dummy%RP(1))+sizeof(dummy%RP(2))

CALL MPI_TYPE_CREATE_STRUCT(3,nBLOCKS,DISPLACEMENTS,TYPES,MPI_PARTICLE_TYPE,MPI%ierr)
CALL MPI_TYPE_COMMIT(MPI_PARTICLE_TYPE,MPI%ierr)

Do you notice any error? The displacement part is still not clear to me. Can someone help me, please?

 

 
James_T_Intel
Moderator

Ok, I think the double precision could be throwing it off. The displacements tell MPI where to find the data relative to the starting address. Checking the starting address of each component of your defined type, the ip component takes 8 bytes instead of 4, likely due to alignment: with -r8 the reals are 8 bytes wide, so the compiler pads after ip to put RP on an 8-byte boundary.

I'd recommend in this case using loc to get the displacements. It is non-standard, but fairly widely implemented, and it gives you the exact addresses.

[plain]displacements(1)=0

displacements(2)=loc(dummy%rp(1)) - loc(dummy%ip)

displacements(3)=loc(dummy%qq(1)) - loc(dummy%ip)[/plain]
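If you want to stay within the MPI standard, MPI_GET_ADDRESS does the same job portably (a sketch; base and addr are local helper variables):

[plain]integer(kind=MPI_ADDRESS_KIND) :: base, addr

call MPI_GET_ADDRESS(dummy%ip, base, ierr)
displacements(1) = 0
call MPI_GET_ADDRESS(dummy%rp(1), addr, ierr)
displacements(2) = addr - base
call MPI_GET_ADDRESS(dummy%qq(1), addr, ierr)
displacements(3) = addr - base[/plain]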

 
