Intel® Fortran Compiler

forrtl: severe (408): fort: (8): Attempt to fetch from allocatable variable G_COARSWRK when it is not allocated

saeed_p_
Beginner

Dear All 

I am debugging my code and I came across the strange error message shown below.

forrtl: severe (408): fort: (8): Attempt to fetch from allocatable variable G_COARSWRK when it is not allocated  

The problem is that I have already checked the code, and the variable is allocated, as you can see from the output below.

Output of the code:

[0] MPI startup(): Multi-threaded optimized library
[2] MPI startup(): shm data transfer mode
[0] MPI startup(): shm data transfer mode
[1] MPI startup(): shm data transfer mode
[3] MPI startup(): shm data transfer mode
[0] MPI startup(): Rank    Pid      Node name             Pin cpu
[0] MPI startup(): 0       58392    ceft01.grid.fe.up.pt  {0,1,2,3,4,5,6,7,32,33,34,35,36,37,38,39}
[0] MPI startup(): 1       58393    ceft01.grid.fe.up.pt  {8,9,10,11,12,13,14,15,40,41,42,43,44,45,46,47}
[0] MPI startup(): 2       58394    ceft01.grid.fe.up.pt  {16,17,18,19,20,21,22,23,48,49,50,51,52,53,54,55}
[0] MPI startup(): 3       58395    ceft01.grid.fe.up.pt  {24,25,26,27,28,29,30,31,56,57,58,59,60,61,62,63}
[0] MPI startup(): I_MPI_DEBUG=5
[0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=2
[0] MPI startup(): I_MPI_PIN_MAPPING=4:0 0,1 8,2 16,3 24
 L_CoarsWrk is allocated
 G_CoarsWrk is allocated
--------------------------------------------------------------------------------------
 
but I do not know why I get this message. The odd thing is that it only happens when I run with mpirun (more than one process); running the code on a single CPU produces no error.
 
I used the following code.
 
  REAL(prec), ALLOCATABLE, DIMENSION(:,:,:), SAVE :: &
       L_CoarsWrk, G_CoarsWrk
 
    ! Working arrays for temporal spectra
    ALLOCATE(L_CoarsWrk(coars_j,coars_k,nL1))
    IF(master) print*,'L_CoarsWrk is allocated'
    IF(master)THEN
       ALLOCATE(G_CoarsWrk(coars_j,coars_k,nG1))
       print*,'G_CoarsWrk is allocated'
    ENDIF
 
     IF(master.AND.DEBGON)THEN
        WRITE(logfl,'(/,"DBG: m_stats_setup OK")')
        FLUSH(logfl)
     ENDIF
 
I used the -assume realloc_lhs flag as some colleagues recommended, but it did not help.
Best regards
 
Navdeep_Rana
Beginner
IF(master)THEN

    ALLOCATE(G_CoarsWrk(coars_j,coars_k,nG1))

    print*,'G_CoarsWrk is allocated'

ENDIF

This allocates G_CoarsWrk only on the master. Are you sure you are not accessing it in any other process? Since, as you said, it does not happen when you run just one process (which is the master), I suspect this is the case.
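One way to confirm this (a sketch only, not from the original code; it assumes local INTEGER variables myrank and ierr declared with the other locals) is to report the allocation status on every rank right before the array is used:

  ! Hypothetical check, not from the original code: report on each rank
  ! whether G_CoarsWrk is allocated just before it is first used.
  CALL MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
  IF (ALLOCATED(G_CoarsWrk)) THEN
     PRINT *, 'rank', myrank, ': G_CoarsWrk is allocated'
  ELSE
     PRINT *, 'rank', myrank, ': G_CoarsWrk is NOT allocated'
  END IF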

saeed_p_
Beginner
Hi,
The only place the G_CoarsWrk array is used is in the CALL MPI_Gather shown below, which gathers the L_CoarsWrk data from all processors into G_CoarsWrk on the master processor.
 
 CALL MPI_Gather(L_CoarsWrk, coars_j*coars_k*nL1, MPI_REAL, &
         G_CoarsWrk, coars_j*coars_k*nL1, MPI_REAL, 0, world, ierr)
    
    ! --- output through master
    IF(master)THEN
       WRITE(ofl) (((G_CoarsWrk(j,k,i), j=1,coars_j), &
            k=1,coars_k), i=2,nG1,stat_inci)
    ENDIF
 
The L_CoarsWrk array is allocated on all processors, whereas G_CoarsWrk is allocated only on the master.
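If the unallocated receive buffer on the non-master ranks is indeed what triggers the error, one possible workaround (a sketch only, not from the original thread) is to give G_CoarsWrk a trivial allocation on those ranks as well; MPI_Gather only reads the receive buffer on the root, so the dummy contents are never used:

    IF(master)THEN
       ALLOCATE(G_CoarsWrk(coars_j,coars_k,nG1))
    ELSE
       ! Dummy allocation so the runtime check does not trip when the
       ! (otherwise unused) receive buffer is passed to MPI_Gather.
       ALLOCATE(G_CoarsWrk(1,1,1))
    ENDIF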
 
Best regards
Navdeep_Rana
Beginner

Is rank=0 your master?

Can you reproduce it in a demo program and paste it here?

This example works for me.

program mpi_bug
  use mpi
  implicit none
  integer :: n,send_count
  integer :: mpi_rank,mpi_procs
  integer :: mpi_error_flag
  integer,allocatable,dimension(:) :: p,q

  call mpi_init(mpi_error_flag)
  call mpi_comm_rank(mpi_comm_world, mpi_rank,mpi_error_flag)
  call mpi_comm_size(mpi_comm_world, mpi_procs, mpi_error_flag)

  n = 4
  send_count = 2
  allocate(p(1:n))
  if (mpi_rank == 0) then
    allocate(q(1:2*n))
    q(:) = 0
    write(*,*) q
  endif

  p(:) = mpi_rank+1
  call mpi_gather(p,send_count,MPI_INTEGER,q,send_count,MPI_INTEGER,0,mpi_comm_world,mpi_error_flag)
  if (mpi_rank == 0) then
    write(*,*) q
  endif
  call mpi_finalize(mpi_error_flag)
end program mpi_bug