Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

allocatable array

yjyincj
Beginner
441 Views
Hi there,
I have a question about the usage of allocatable arrays. Personally, I write statements for an allocatable array in the following format:

if(allocated(x)) deallocate(x)
allocate(x(N,M))
...
deallocate(x)
where x is an allocatable array. However, recently I ran out of virtual memory when running a code on Windows XP. The error comes out after the program has run for a while, indicating that memory is being eaten up progressively from the beginning. I was just wondering about the reason. Does anybody have a thought on this?
Another issue bothering me now is the usage of an allocatable array in a module. For example, in module class_dynamic I define a type Response. The main program calls dynamic and gets 'rsps', which has been allocated in the subroutine. Then I deallocate rsps%d (as shown by <<< below) for the next loop, to avoid running out of memory. This works for a small program, but I am not sure whether it is a good approach or a stupid one. Please give your thoughts.

module class_dynamic

   type Response
      real(8), allocatable :: d(:)
   end type Response

contains

   subroutine dynamic(x, y, z, rsps)
      type(Response), intent(out) :: rsps
      ...
   end subroutine dynamic

end module class_dynamic

program main
   use class_dynamic
   ...
   do while (.true.)
      ...
      call dynamic(x, y, z, rsps)
      deallocate(rsps%d) !--------<<<<<<<<<<<<<<<<
   end do
end program main
7 Replies
yjyincj
Beginner
Come on. Help me.
Steven_L_Intel1
Employee
I'm not really sure what your questions are.

For the first part, it's impossible to speculate on what might cause an out-of-memory error when you don't show complete code. It's possible that you have a coding error in code you have not shown.

For the second part, you don't show where you allocate rsps%d. Since rsps is INTENT(OUT) in DYNAMIC, it is automatically deallocated (if allocated) on entry to the routine. So there is no need to deallocate it manually, assuming that you don't care about the contents outside the loop. (You may want to deallocate it after the loop finishes.)
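To illustrate that rule, here is a minimal self-contained sketch (my own names and sizes, not code from this thread): an INTENT(OUT) dummy argument with an allocatable component is deallocated automatically each time the subroutine is entered, so the loop needs no manual DEALLOCATE.

```fortran
module demo_mod
   implicit none
   type :: Response
      real(8), allocatable :: d(:)
   end type Response
contains
   subroutine dynamic(n, rsps)
      integer, intent(in) :: n
      type(Response), intent(out) :: rsps  ! any prior rsps%d is freed on entry
      allocate(rsps%d(n))
      rsps%d = 0.0d0
   end subroutine dynamic
end module demo_mod

program main
   use demo_mod
   implicit none
   type(Response) :: r
   integer :: i
   do i = 1, 3
      call dynamic(1000*i, r)  ! no deallocate needed between iterations
   end do
   print *, size(r%d)          ! 3000 after the last call
end program main
```

The previous allocation is released by the processor at each call, so the program's memory use stays bounded by the largest single allocation.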
yjyincj
Beginner
OK. The first question is: I try my best to deallocate allocatable arrays as soon as they will never be used again in the program. However, I still get an out-of-virtual-memory error after the program has been running for a while. So what is a possible reason for such an error?

The second one: rsps%d is allocated in subroutine dynamic and deallocated in the main program. I am not sure this is a good coding style. Personally, I feel it is better to allocate and deallocate arrays in the same program UNIT.
Steven_L_Intel1
Employee

It is very hard to have a memory leak with allocatable arrays. You may have found a compiler bug. If you can provide an example showing the problem, we'll be glad to look at it.

Coding style is often a matter of personal taste. For some kinds of routines, it would make sense to do the allocation in the routine and the deallocation elsewhere. Choose whatever works best for you.
jimdempseyatthecove
Honored Contributor III

Even when every allocated object is returned, you may experience an inflation of memory requirements. This can be due to allocation sequencing (and can be corrected with a coding change):

allocate bigA
allocate bigB
deallocate bigA
allocate smallC
allocate bigA
deallocate bigB
allocate smallD
...

Sequences like the above will creep through memory, leaving holes not quite large enough to contain bigA or bigB. Eventually virtual memory will be consumed into the heap, but surprisingly it remains available for allocation (as long as the allocation size is less than or equal to bigA - smallC).

If this doesn't help, then look in MSDN (via Visual Studio help) for "low-fragmentation heap" and how to enable it (assuming it is warranted).
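One coding change along these lines (a sketch under my own assumptions, not code from this thread) is to size a work array once for the worst case and reuse it across iterations, instead of allocating and freeing varying sizes on every pass:

```fortran
program reuse_buffer
   implicit none
   integer, parameter :: nmax = 100000   ! assumed upper bound on any iteration
   real(8), allocatable :: work(:)
   integer :: i, n

   allocate(work(nmax))        ! one allocation, sized for the worst case
   do i = 1, 100
      n = min(1000*i, nmax)    ! this iteration's actual problem size
      work(1:n) = real(i, 8)   ! operate only on the leading n elements
   end do
   deallocate(work)
end program reuse_buffer
```

Because the heap then sees a single long-lived block rather than many blocks of shifting sizes, the hole-leaving sequence described above never occurs.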

Jim Dempsey

yjyincj
Beginner
This makes sense to me. Thanks.
yjyincj
Beginner
Jim, I searched and found the following information:

Heap fragmentation is a state in which available memory is broken into small, noncontiguous blocks. When a heap is fragmented, memory allocation can fail even when the total available memory in the heap is enough to satisfy a request, because no single block of memory is large enough. The low-fragmentation heap (LFH) helps to reduce heap fragmentation.

The LFH is not a separate heap. Instead, it is a policy that applications can enable for their heaps. When the LFH is enabled, the system allocates memory in certain predetermined sizes. When an application requests a memory allocation from a heap that has the LFH enabled, the system allocates the smallest block of memory that is large enough to contain the requested size. The system does not use the LFH for allocations larger than 16 KB, whether or not the LFH is enabled.

An application should enable the LFH only for the default heap of the calling process or for private heaps that the application has created. To enable the LFH for a heap, use the GetProcessHeap function to obtain a handle to the default heap of the calling process, or use the handle to a private heap created by the HeapCreate function. Then call the HeapSetInformation function with the handle.

The LFH cannot be enabled for heaps created with HEAP_NO_SERIALIZE or for heaps created with a fixed size. The LFH also cannot be enabled if you are using the heap debugging tools in Debugging Tools for Windows or Microsoft Application Verifier.

After the LFH has been enabled for a heap, it cannot be disabled.

Applications that benefit most from the LFH are multi-threaded applications that allocate memory frequently and use a variety of allocation sizes under 16 KB. However, not all applications benefit from the LFH. To assess the effects of enabling the LFH in your application, use performance profiling data.

I still have no idea how to enable the LFH from Fortran code. Thanks.
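For what it's worth, here is a hedged sketch of how this might be done from Fortran via C interoperability. GetProcessHeap and HeapSetInformation are real Win32 functions, but the interface blocks below are my own untested assumption, and on 32-bit Windows these APIs use the STDCALL calling convention, which may additionally require a compiler directive (e.g. Intel's !DEC$ ATTRIBUTES STDCALL).

```fortran
program enable_lfh
   use iso_c_binding
   implicit none

   interface
      ! HANDLE GetProcessHeap(void)
      function GetProcessHeap() bind(C, name="GetProcessHeap")
         import :: c_ptr
         type(c_ptr) :: GetProcessHeap
      end function GetProcessHeap

      ! BOOL HeapSetInformation(HANDLE, HEAP_INFORMATION_CLASS, PVOID, SIZE_T)
      function HeapSetInformation(hHeap, infoClass, info, infoLen) &
            bind(C, name="HeapSetInformation")
         import :: c_ptr, c_int, c_size_t
         type(c_ptr), value :: hHeap
         integer(c_int), value :: infoClass   ! 0 = HeapCompatibilityInformation
         type(c_ptr), value :: info
         integer(c_size_t), value :: infoLen
         integer(c_int) :: HeapSetInformation
      end function HeapSetInformation
   end interface

   integer(c_int), target :: lfh = 2   ! 2 selects the low-fragmentation heap
   integer(c_int) :: ok

   ok = HeapSetInformation(GetProcessHeap(), 0_c_int, c_loc(lfh), c_sizeof(lfh))
   if (ok == 0) print *, "HeapSetInformation failed"
end program enable_lfh
```

This only needs to run once, early in the program, before the heavy allocation loop begins.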