Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Code hangs when variable values increase

Wee_Beng_T_
Beginner

Hi,

I have the latest Intel MPI 5.1, together with the Intel Fortran compiler, on my own Ubuntu Linux machine. I tried to run my code, but it hangs. It worked fine on different clusters before.

I realised the problem lies with MPI_BCAST, so I wrote a very simple test program:

program mpi_bcast_test

implicit none

include 'mpif.h'

integer :: no_vertices, no_surfaces, size, myid, ierr, status
integer, allocatable :: tmp_mpi_data1(:)

call MPI_INIT(ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)

! Only the root rank knows the sizes initially.
if (myid == 0) then

    no_vertices = 1554

    no_surfaces = 3104

end if

! Broadcast the sizes so every rank can allocate the same buffer.
call MPI_BCAST(no_surfaces, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

call MPI_BCAST(no_vertices, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

allocate (tmp_mpi_data1(3*no_surfaces + 11*no_vertices + 1), STAT=status)

tmp_mpi_data1 = 0

if (myid == 0) tmp_mpi_data1 = 100

! This is the broadcast that hangs.
call MPI_BCAST(tmp_mpi_data1, 3*no_surfaces + 11*no_vertices + 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

print *, "myid, tmp_mpi_data1(2) = ", myid, tmp_mpi_data1(2)

call MPI_FINALIZE(ierr)

end program mpi_bcast_test

If I run it as is, it hangs at:

call MPI_BCAST(tmp_mpi_data1, 3*no_surfaces + 11*no_vertices + 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

But if I change the values of no_vertices and no_surfaces to small values, like 1 or 2, it works without a problem.

Why is that? Is this a bug in Intel MPI 5.1, or a problem on my end?

Thanks

22 Replies
James_T_Intel
Moderator

No, this has not yet been corrected.

James_T_Intel
Moderator

Our developers have not yet been able to reproduce this. Can you try running with the latest version (5.1 Update 3) or with our 2017 Beta to see if you still encounter the problem?
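In the meantime, running the reproducer with Intel MPI's debug output enabled can show which fabric and protocol each rank selects, which is useful information to attach to a report. A sketch (I_MPI_DEBUG and I_MPI_FABRICS are standard Intel MPI environment variables; the binary name is the test program above):

```shell
# Print per-rank fabric/protocol selection details while running the reproducer.
I_MPI_DEBUG=5 mpirun -n 2 ./mpi_bcast_test

# If intra-node shared memory is suspected, forcing TCP is a quick cross-check.
I_MPI_FABRICS=tcp mpirun -n 2 ./mpi_bcast_test
```

These are command fragments for an installed Intel MPI runtime, not something runnable standalone.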
