Intel® Fortran Compiler

help with mpi

roddur
Beginner

Dear Friends,

I am facing a problem with MPI-parallelising my code (attached as main.f90).

As you can see, there is a part:

[fortran]!Initialize and check system for MPI
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD,myid,ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,ierr)
write(*,*) "node",myid
write(*,*) "numprocs",numprocs
!---------------------------------------!

!----------loop for spin----------------!
!         loop 1=>up spin;              !
!         loop 2=>down spin             !
!---------------------------------------!

lspin: do nsp=1,spn

......

ltype:  do ityp=1,ntype [/fortran]

and so on,

I want to run it on 32 processors (4 nodes with 8 procs/node), with the hope that each node will run a separate combination of (lspin, ltype).

But as expected, life never goes the way you want. Can you please let me know where it went wrong?

I have the full code attached.

The output is:

[bash]node           0
numprocs           4

WORKING FOR SPIN UP
Reading POTENTIAL PARAMETERS from  POTPAR_A  
node           2
numprocs           4
WORKING FOR SPIN UP
Reading POTENTIAL PARAMETERS from  POTPAR_A  
node           1
numprocs           4
WORKING FOR SPIN UP
Reading POTENTIAL PARAMETERS from  POTPAR_A  
node           3
numprocs           4
WORKING FOR SPIN UP
Reading POTENTIAL PARAMETERS from  POTPAR_A  
Reading POTENTIAL PARAMETERS from  POTPAR_B  
Reading POTENTIAL PARAMETERS from  POTPAR_B  
Reading POTENTIAL PARAMETERS from  POTPAR_B  
Reading POTENTIAL PARAMETERS from  POTPAR_B  
WORKING FOR ATOM 1
INFO_FCC
WORKING FOR ATOM 1
INFO_FCC
WORKING FOR ATOM 1
INFO_FCC
WORKING FOR ATOM 1[/bash]

But as I said, I expected (spn=1,atom=1), (spn=1,atom=2), (spn=2,atom=1), (spn=2,atom=2) to run on four different nodes.

How to accomplish this?

Tim_Gallagher
New Contributor II

OpenMPI just starts the code on multiple processes and lets the programmer communicate between them. It does not divide the work across different data on its own; to do that, the programmer has to partition the work explicitly.

For example, if you just had:

[fortran]DO i = 1, 10
   CALL calcStuff(i)
END DO[/fortran]

with OpenMPI, all processes would call calcStuff 10 times and thus duplicate work. Let's say you wanted each processor to do its own number, and you have 10 procs. Then you would just put:

CALL calcStuff(myID + 1)

(the +1 because MPI ranks start at 0 while the loop runs from 1 to 10).

Whatever code you write is executed by all the processes. So if you want to divide work among processors, it needs to be done by the programmer based on the processor ID (or some other way of dividing work, like having each processor read a unique file containing the data it needs).
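To make that concrete for the original post's nested loops, here is a minimal sketch of one possible decomposition: flatten the (spin, type) pairs into a single combination index and let each rank skip every pair that is not its own. Variable names (spn, ntype, myid, numprocs, lspin, ltype) follow the posted code; the round-robin mapping itself is just one reasonable choice, not the only one, and it also works when there are more combinations than processes.

[fortran]program divide_work
   use mpi   ! or: include 'mpif.h', depending on your MPI installation
   implicit none
   integer :: ierr, myid, numprocs
   integer :: nsp, ityp, icomb
   integer, parameter :: spn = 2, ntype = 2   ! illustrative sizes

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)

   lspin: do nsp = 1, spn
      ltype: do ityp = 1, ntype
         ! Flatten the two loop indices into one 0-based combination index.
         icomb = (nsp - 1)*ntype + (ityp - 1)
         ! Round-robin: each rank handles only its own combinations.
         if (mod(icomb, numprocs) /= myid) cycle ltype
         write(*,*) "rank", myid, "handles nsp =", nsp, "ityp =", ityp
         ! ... do the actual work for this (nsp, ityp) pair here ...
      end do ltype
   end do lspin

   call MPI_FINALIZE(ierr)
end program divide_work[/fortran]

Run with, e.g., mpirun -np 4 ./divide_work: with spn=2 and ntype=2 each of the four ranks picks up exactly one (nsp, ityp) pair, which is the behaviour the original post was hoping for.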

Tim
