Intel® Moderncode for Parallel Architectures
Support for developing parallel programming applications on Intel® Architecture.

Intel MPI issues: MPI_COMM_SIZE returns zero processes

Balasubramanian__Aru

Hi,

I tried to build a simple MPI application using the Intel MPI libraries and ran into issues at the execution stage. The subroutine MPI_COMM_SIZE returns zero processes, while the rank returned by MPI_COMM_RANK looks fine. Furthermore, the message passing through MPI_SEND and MPI_RECV doesn't seem to happen, although no error is produced. I am currently using the Intel MPI Library 2018 Update 3. On the other hand, the same code gives correct results when I use the MS-MPI library components. The fact that the build goes through cleanly suggests there isn't any issue with the functionality itself; my guess is that the difference in the launch tool (mpiexec) and process manager (hydra) is what is causing the issue. Here is the sample code I was trying to execute.

program array 
      include 'mpif.h'

      integer   nprocs, MASTER
      parameter (nprocs = 5)
      parameter (MASTER = 0)

      integer  numtasks, taskid, ierr, dest, offset, i, tag2, source
      real*8   data(nprocs-1)
      integer  status(MPI_STATUS_SIZE)
      
! ***** Initializations *****

      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)      
      call MPI_COMM_RANK(MPI_COMM_WORLD, taskid, ierr)

      tag2 = 1      
      write(*,*)'numtasks',numtasks
      
!***** Master task only ******
      if (taskid .eq. MASTER) then

!       Initialize array        
        do i=1, nprocs-1
          data(i) = i * 1.0          
        end do        

!       Send each task an element of the array
        do dest=1, numtasks-1          
          call MPI_SEND(data(dest), 1, MPI_DOUBLE_PRECISION, dest, &
           tag2, MPI_COMM_WORLD, ierr)          
        end do
!
!       Wait to receive results from each task
        do i=1, numtasks-1
          source = i          
          call MPI_RECV(data(i), 1, MPI_DOUBLE_PRECISION,  &
           source, tag2, MPI_COMM_WORLD, status, ierr)
        end do          
        write(*,*)'Data received at Master:',data
        
      end if

!***** Non-master tasks only *****

      if (taskid .gt. MASTER) then     

          
!       Receive array elements from the master task
        call MPI_RECV(data(taskid), 1, MPI_DOUBLE_PRECISION, MASTER, &
         tag2, MPI_COMM_WORLD, status, ierr)         
         
        write(*,*)'Data received at worker:',data(taskid)
!
        data(taskid)=data(taskid)*10

!       Send my results back to the master        
        call MPI_SEND(data(taskid), 1, MPI_DOUBLE_PRECISION, MASTER, &
         tag2, MPI_COMM_WORLD, ierr)
!
      endif

      call MPI_FINALIZE(ierr)

      end

Run it as: >mpiexec -n 5 MPI.exe

Results when linked against the MS-MPI library:

numtasks           5
numtasks           5
numtasks           5
numtasks           5
numtasks           5
Data received at worker:   1.00000000000000
Data received at worker:   2.00000000000000
Data received at worker:   3.00000000000000
Data received at worker:   4.00000000000000
Data received at Master:   10.0000000000000        20.0000000000000
  30.0000000000000        40.0000000000000

Results when linked against the Intel MPI library:

numtasks           0
numtasks           0
numtasks           0
numtasks           0
numtasks           0
Data received at Master:   1.00000000000000        2.00000000000000
  3.00000000000000        4.00000000000000

I have tried the other steps pointed out in the reference manual, such as registering mpiexec and installing the hydra manager. I also reverted to the previous release of the Intel MPI library and, to my dismay, found the same issue. Any pointers in this regard would be greatly appreciated.
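
For completeness, the steps I mean by registering mpiexec and installing the hydra manager were essentially the following, run from an administrator command prompt (quoting from memory; the reference manual has the exact syntax):

>hydra_service.exe -install
>mpiexec -register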

Kind regards,

Arun
