Intel® Moderncode for Parallel Architectures
Support for developing parallel programming applications on Intel® Architecture.

Allocation of small array gives insufficient virtual memory (41)

Anders_S_1
New Contributor III
mpiifort /debug:full /warn:interfaces /gen-dep /integer_size:64 /real_size:64 /MP /Qmkl:sequential /Qopenmp-offload- /Qopenmp_stubs /libs:qwin    @gemmpix.txt /exe:prog.exe /link /STACK:50000000,50000000 
mpiexec -localroot -n 2 C:\Data\prog.exe

Hi,

I have run a program built with Visual Studio + Fortran + MPI 2017 with no memory problems. I have now also tried to run the program from the command line, but I get error 41 (insufficient virtual memory) as soon as I try to allocate a small vector. The mpiifort and mpiexec commands are given at the top, as I was not allowed to upload the bat files!?

 

Best regards

Anders S 

jimdempseyatthecove
Honored Contributor III

What happens when you issue

C:\Data\prog.exe

then

mpiexec -localroot -n 1 C:\Data\prog.exe

Jim Dempsey

Anders_S_1
New Contributor III

Hi James,

I removed the first three options (/debug:full, /warn:interfaces and /gen-dep) from the mpiifort command and got rid of the virtual memory problem for the time being. I will first try to run the whole program; then I will add the removed options back gradually and see whether the problem reappears. I will also check your suggestion and respond.

Best regards

Anders S

jimdempseyatthecove
Honored Contributor III

I'd suggest building the application with: /check:bounds,uninit,pointers
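Applied to the mpiifort line from your first post, that would be something like the following (just your original command with the run-time checks added; adjust to taste):

mpiifort /debug:full /check:bounds,uninit,pointers /integer_size:64 /real_size:64 /MP /Qmkl:sequential /Qopenmp-offload- /Qopenmp_stubs /libs:qwin @gemmpix.txt /exe:prog.exe /link /STACK:50000000,50000000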

The /warn:interfaces option only needs to be used when you add or change functions and/or subroutines that are not module or contained procedures. It is a compile-time check.

I am assuming your build environment is 64-bit. Is this so?

The error "insufficient virtual memory" is seldom seen without heap corruption. On Windows, an allocation will tend to succeed because of the available virtual memory; however, shortly thereafter, when you first touch the allocated memory, it is mapped, page by page, into the system page file and/or physical RAM. If your program is not corrupting the heap, then look at whether you can increase the system page file size.
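As an aside, a minimal sketch of trapping the allocation failure itself, with a made-up array name and size, looks like this:

      DOUBLE PRECISION,ALLOCATABLE::work(:)   ! hypothetical array
      INTEGER istat
      CHARACTER(LEN=128) emsg

      ! report the failure instead of dying with "insufficient virtual memory (41)"
      ALLOCATE(work(1000),STAT=istat,ERRMSG=emsg)
      IF(istat.NE.0)THEN
          WRITE(6,*)'ALLOCATE failed, stat=',istat,' msg=',TRIM(emsg)
          STOP
      ENDIF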

Jim Dempsey

Anders_S_1
New Contributor III

Hi Jim,

As I told you, I removed some options and got the code running, and I saw no problems when I put the options back. So let us set the initial problem aside for a new observation.

On two occasions I have found that parameter values appear to change during runtime when I call MPI_BCAST to send data from rank 0 to the other ranks.

The first case was solved when I changed the memory location of the parameter (a CHARACTER*3 parameter).

In the second case I replaced BCAST with SCATTER, but the change persisted (in the second element of a two-element integer vector).

This seems very odd to me. Have you ever heard of such an effect? I will continue to investigate and hopefully work around the problem. If possible I will try to isolate the problem in a small piece of code.

Another observation is that now and then I get a compile error, but when I recompile without making any code change the error is gone. I am running from the command line. As far as I remember, I have never experienced this in the VS environment.

Best regards

Anders S

jimdempseyatthecove
Honored Contributor III

I haven't heard of (or experienced) such problems.

For the second element of a two-element message to get corrupted:

a) An unlikely error in the messaging system resulting in a short memory transfer
b) An error in the type/count parameters to the BCAST resulting in a short transfer
c) The correct data may have been transferred, but something in the receiver code is stomping on the second element before you use it.

Check b) and c) first.
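One way to separate b) from c), sketched here with made-up names for your vector and rank variables, is to print the buffer on every rank immediately after the BCAST returns and again just before the element is used:

      ! ivec is the two-element integer vector in question (hypothetical name)
      CALL MPI_BCAST(ivec,2,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
      WRITE(6,*)'rank',rank,' after BCAST: ivec=',ivec    ! wrong already here -> case b)
      ! ... rest of the receiver code ...
      WRITE(6,*)'rank',rank,' before use : ivec=',ivec    ! wrong only here    -> case c)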

Jim Dempsey

Anders_S_1
New Contributor III

Hi Jim,

I reinstalled Cluster Studio, and compiling is now OK without repeated recompiles.

Command-line execution with MPI and QuickWin works except when I use MPI_BCAST to copy values in an allocated array from rank 0 to all other ranks. I made a small test example, "test_bcast", which illustrates my problem. Bat files for compiling and running are also attached.

Best regards

Anders   S

Anders_S_1
New Contributor III

Hi Jim,

The files disappeared! Here they are again.

Best regards

Anders S

jimdempseyatthecove
Honored Contributor III

You have a programming error.

Your MPI_BCAST will work correctly inside the IF(rank.EQ.0) THEN block; however, the other ranks, going around the IF block, may execute your write (with its read of the local array x) prior to the completion of the rank 0 MPI_BCAST. Insert an MPI_BARRIER(MPI_COMM_WORLD,ierr) call prior to the write.

Jim Dempsey

Anders_S_1
New Contributor III

Hi Jim,

Thanks for a prompt answer! I added the barrier, but it did not solve the problem (see the attached screenshot)!

Does it have something to do with allocated arrays?

Best regards

Anders S

Anders_S_1
New Contributor III

Hi Jim,

I added -genv I_MPI_DEBUG=6 to the mpiexec command and got the attached data. Otherwise nothing was changed.

Best regards

Anders S

jimdempseyatthecove
Honored Contributor III

MPI_BCAST appears not to transfer data on my system either.

Windows 7 Pro x64 running Parallel Studio IVF v17 update 4 (as 64-bit),

running 2 processes on the same system.

Jim

Anders_S_1
New Contributor III

Hi Jim,

I have the same system. As I told you before, the call to BCAST in my application code failed and seemed to cause a change in another variable that had nothing to do with the BCAST.

Best regards

Anders S

jimdempseyatthecove
Honored Contributor III

Arrghhh!

A case of not being able to see the forest for the trees.

PROGRAM test_bcast
    USE MPI
    IMPLICIT NONE
    DOUBLE PRECISION,ALLOCATABLE::x(:)
    INTEGER i,ierr,rank,size,tag,istat,root
    !---------------------------------------------------------------------------------------
    ALLOCATE(x(5))
    !---------------------------------------------------------------------------------------
    root = 0
    CALL MPI_INIT(ierr)
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD,size,ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
    !---------------------------------------------------------------------------------------
    x = -1.0
    IF(rank.EQ.0)THEN
        x(1:5)=0.1
    ENDIF
    ! every rank makes this call: MPI_BCAST is executed by the sender and all receivers
    CALL MPI_BCAST(x,5,MPI_DOUBLE_PRECISION,root,MPI_COMM_WORLD,ierr)
    !---------------------------------------------------------------------------------------
    write(6,111)size,rank,(x(i),i=1,5)
111 format('size,rank,x=',2I4,5d12.4)
    !---------------------------------------------------------------------------------------
    CALL MPI_FINALIZE(ierr)
    DEALLOCATE(x)
    IF(rank.EQ.0)    pause "Press key to continue"
    STOP
END

MPI_BCAST is both send and receive: every rank in the communicator has to call it, not just rank 0.

Jim Dempsey

Anders_S_1
New Contributor III

Hi Jim,

Thank you for your answer! Initially I had looked at a 32-bit MPI tutorial and used e.g. INTEGER*4 ierr and MPI_INTEGER4 for integers in my 64-bit (/integer_size:64) environment. There were no error indications, just some strange errors. After removing these anomalies everything works fine.
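For reference, a minimal sketch of the consistent declarations I use now (this assumes the MPI library's integer kind matches the 64-bit default integer from /integer_size:64, e.g. via the ILP64 interface):

PROGRAM kinds_sketch
    USE MPI
    IMPLICIT NONE
    ! handles, counts, ierr and integer buffers all use the default
    ! (here 64-bit) INTEGER kind, and buffers are described with MPI_INTEGER
    INTEGER ierr,rank,ivec(2)
    CALL MPI_INIT(ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
    ivec=0
    IF(rank.EQ.0)ivec=(/11,22/)
    CALL MPI_BCAST(ivec,2,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
    write(6,*)'rank',rank,' ivec=',ivec
    CALL MPI_FINALIZE(ierr)
END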

In some cases not all ranks are needed for the computations, and I therefore want to construct a subset communicator REDCOMM of MPI_COMM_WORLD by using MPI_COMM_SPLIT. However, I cannot get it right. In the supplied code example I try to define a REDCOMM with size=6 from an MPI_COMM_WORLD with size=8. I have consulted the book by Gropp, Lusk and Skjellum, but I cannot figure out what is wrong.

Best regards

Anders S

jimdempseyatthecove
Honored Contributor III

From your .jpg, I only see one rank's output.

Try to rework this: https://stackoverflow.com/questions/22737842/how-are-handles-distributed-after-mpi-comm-split

Jim Dempsey

Anders_S_1
New Contributor III

Visiting Stack Overflow gave the following method to obtain a single reduced communicator REDCOMM containing ranks 0, ..., size1-1, where size1 < size:

IF(rank.LE.size1-1)THEN; color=1; ELSE; color=MPI_UNDEFINED; ENDIF

key=rank

CALL MPI_Comm_split(MPI_COMM_WORLD,color,key,REDCOMM,ierr)
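For completeness, a small test program along these lines (size1=6 as in my example; the output at the end is just for illustration) would be:

PROGRAM test_split
    USE MPI
    IMPLICIT NONE
    INTEGER ierr,rank,size,size1,color,key,REDCOMM,redrank,redsize
    !---------------------------------------------------------------------------------------
    CALL MPI_INIT(ierr)
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD,size,ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
    !---------------------------------------------------------------------------------------
    size1=6                                 ! keep ranks 0,...,5 (run with size=8)
    IF(rank.LE.size1-1)THEN
        color=1
    ELSE
        color=MPI_UNDEFINED                 ! excluded ranks get REDCOMM=MPI_COMM_NULL
    ENDIF
    key=rank                                ! preserve the MPI_COMM_WORLD ordering
    CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,color,key,REDCOMM,ierr)
    !---------------------------------------------------------------------------------------
    IF(REDCOMM.NE.MPI_COMM_NULL)THEN
        CALL MPI_COMM_RANK(REDCOMM,redrank,ierr)
        CALL MPI_COMM_SIZE(REDCOMM,redsize,ierr)
        write(6,*)'world rank',rank,' -> REDCOMM rank',redrank,' of',redsize
        CALL MPI_COMM_FREE(REDCOMM,ierr)
    ENDIF
    !---------------------------------------------------------------------------------------
    CALL MPI_FINALIZE(ierr)
END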

Best regards

Anders S
