Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

Serious memory leak when writing data to a binary file

Ben_W_
Beginner
Hi, all:

I'm having a serious memory leak problem with the WRITE statement in Intel Visual Fortran compiler version 12. As the simple code below demonstrates, I'm writing about 400 MB of binary data to a direct-access, unformatted file. When I compile and run the program, the Performance tab in the Windows Task Manager shows system memory usage increasing by about 400 MB. After a couple of minutes the memory usage drops back to the level it was at before the program ran, which is why I'm calling it a memory leak. This behavior is not acceptable in my real application: I usually write out several gigabytes of data, which uses up my physical memory and makes my application very slow.

It seems that when writing to the file, Windows caches the data in memory first, keeps it for a while, and then releases it. How can I disable this caching behavior in the OPEN/WRITE statements? Or is this a compiler bug in IVF?

Thanks in advance.

Ben

program WriteMemoryLeak

    implicit none
    integer :: ios, m
    character(36) :: fil
    integer, dimension(512) :: p

    fil = 'IntelWriteTest.dat'

    ! Note: RECL units are not portable -- ifort counts 4-byte units by
    ! default and counts bytes only with /assume:byterecl.
    open( 8, file=trim(fil), access='direct', form='unformatted', &
          recl=kind( 1. )*512, status='unknown', buffered='NO', iostat=ios )

    do m = 1, 200000
       p = m
       ! write( 8, rec=4+m-1, iostat=ios ) p
       call writefile(m, p, 512)
    end do

    close( 8 )

end program WriteMemoryLeak

subroutine writefile(k, p, n)
    implicit none
    integer :: k, n
    integer, dimension(n) :: p
    integer :: ios

    write( 8, rec=4+k-1, iostat=ios ) p

end subroutine writefile
chris_biddlecombe

I have tried the same code for opening a large direct access file but without complete success. UOPEN seems to work if I leave out the statement:

FLAGS_ATTR = FLAGS_ATTR + FILE_FLAG_NO_BUFFERING

But with that statement included, the first READ from the file gives me IOSTAT=39 (severe error on read).

I am using Composer 2013.2.149.
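For anyone trying to reproduce this, the USEROPEN hook I am using looks roughly like the sketch below. It is modeled on the UOPEN sample in the Intel Fortran documentation; the exact argument list, kinds, and the CreateFile call should be verified against the USEROPEN documentation for your compiler version before use.

```fortran
! Sketch of a USEROPEN routine that disables the system file cache.
! Argument names and kinds follow Intel's UOPEN sample from memory and
! should be checked against the current USEROPEN documentation.
integer(HANDLE) function uopen( filename, desired_access, share_mode, &
                                a_null, create_disp, flags_attr, unit, openflags )
    use ifwin        ! CreateFile, FILE_FLAG_NO_BUFFERING, NULL, DWORD, ...
    implicit none
    character(*)    :: filename
    integer(DWORD)  :: desired_access, share_mode, create_disp, flags_attr
    integer(LPVOID) :: a_null
    integer(DWORD)  :: unit, openflags

    ! This is the statement in question: it turns off the system cache,
    ! but then requires RECL to be a multiple of the volume sector size.
    flags_attr = ior( flags_attr, FILE_FLAG_NO_BUFFERING )

    uopen = CreateFile( filename, desired_access, share_mode, NULL, &
                        create_disp, flags_attr, NULL )
end function uopen
```

The hook is attached in the OPEN statement with USEROPEN=uopen (with uopen declared EXTERNAL in the calling scope).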

SergeyKostrov
Valued Contributor II
How large is the file? Error code 39 corresponds to the system error "The disk is full" on Windows platforms, which is very strange. Please provide more details.
Bernard
Valued Contributor I

>>>Also I found a very useful memory-monitoring tool called RAMMAP.exe, which can be downloaded from Microsoft: http://technet.microsoft.com/en-us/sysinternals/ff700229.aspx. This tool shows all cached memory for each individual file plus a lot of other information.>>>

I am a bit late, and I am glad that your problem has been solved. I wanted to recommend M. Russinovich's RamMap tool for tracking memory consumption in the system. For finding memory leaks you can use the umdh tool from the Windows debugging tools. UMDH will also give you a call stack of the offending thread(s) and an option to track memory usage as a function of time. A more advanced option is an invasive break in windbg, where you can put a breakpoint on the heap freeing and destroying routines.

Link: http://blogs.msdn.com/b/ntdebugging/archive/2012/04/26/troubleshooting-memory-leaks-with-just-a-dump.aspx

chris_biddlecombe

Sergey,

I wish it was that simple. The file is less than 10MBytes on a disk with 350GBytes free.

Chris

andrew_4619
Honored Contributor III

Where is the file located? Windows 7 puts restrictions on, and does some strange things with, certain folders, e.g. "Program Files".

chris_biddlecombe

The file is on my D: drive which is all user file space. The system (including Program Files) is on C:

John_Campbell
New Contributor II

Why do you want to turn buffering off?
I would use standard Fortran direct-access I/O and FLUSH when required.
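A minimal sketch of that approach follows; the unit number, file name, record count, and flush interval are illustrative, and note the RECL-units caveat (ifort counts 4-byte units by default, bytes with /assume:byterecl):

```fortran
program flush_demo
    implicit none
    integer :: m
    integer :: p(512)

    ! recl is in 4-byte units with ifort defaults (bytes with /assume:byterecl)
    open( 10, file='flushtest.dat', access='direct', form='unformatted', recl=512 )

    do m = 1, 2000
        p = m
        write( 10, rec=m ) p
        if ( mod(m, 256) == 0 ) flush( 10 )   ! hand buffered records to the OS periodically
    end do

    close( 10, status='delete' )
end program flush_demo
```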

John

chris_biddlecombe

It appears that Windows allows the memory used by buffering to grow to fill the machine so that eventually the system becomes unresponsive. The odd thing is that the total memory in use increases but the memory used by the application doesn't. "intel@breault.com" reports that using USEROPEN fixes the problem for him. I am asking for more information so I can discover why it does not work for me in very similar circumstances.

chris_biddlecombe

I found, in the small print of CreateFile(), a restriction that with FILE_FLAG_NO_BUFFERING the record length must be an integer multiple of the volume sector size. Having changed my code to use a slightly larger record size that satisfies that criterion, I can now switch buffering off and memory does not grow.
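For the record, rounding a desired record length up to a whole number of sectors can be done like this (a sketch; the 512-byte sector size is an assumption, and in real code the value should be queried from the volume, e.g. with GetDiskFreeSpace):

```fortran
program recl_round
    implicit none
    integer, parameter :: sector = 512   ! assumed bytes per sector; query the volume in real code
    integer :: want, recl_bytes

    want = 2000                                          ! payload bytes per record (example)
    recl_bytes = ((want + sector - 1)/sector)*sector     ! round up to sector multiple
    print *, recl_bytes                                  ! 2000 rounds up to 2048
end program recl_round
```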

SergeyKostrov
Valued Contributor II
>>...It appears that Windows allows the memory used by buffering to grow to fill the machine so that eventually >>the system becomes unresponsive...

Chris, there are no unknowns here: you need to check the Virtual Memory (VM) settings using the System applet in Control Panel. In one of my VM tests I was able to allocate ~1.95 GB of memory on a computer with 128 MB of physical memory (32-bit Windows 2000 Professional). It is a special system that allows me to simulate an embedded environment with limited system resources.
chris_biddlecombe

Sergey,

I think you might have missed the point of this story. No memory is being explicitly allocated, and the size of the process does not grow. But with a standard OPEN (i.e. without the USEROPEN option) the total system memory usage grows until the system becomes unresponsive.

After implementing USEROPEN with FILE_FLAG_NO_BUFFERING, I no longer see any increase in the total system memory usage.

Chris

jimdempseyatthecove
Honored Contributor III

Too much candy gives one a bellyache.

John_Campbell
New Contributor II

I think the problem could be a clash between the operating system's disk caching and ifort's buffering.
I did tests on another compiler, comparing buffering performance between XP and Win 7. With Win 7 there was a significant increase in the amount of (unused) memory being allocated to disk caching, and at times it appeared to be taking too much memory, especially for files larger than 2 GB. However, the net performance was significantly improved. There is a sweet spot for disk caching when the active scope of the file being used is less than the total memory installed. (It was also my impression, when first comparing Win 7 to XP, that too much memory was being taken for caching, much like the memory-leakage claim of this post.)
There could be a conflict between the operating system caching and ifort's BUFFERED='YES'. However, I would again expect a net improvement in performance.
My earlier testing (on another compiler) showed that the performance of standard Fortran direct-access files was just as good as the direct system routines that have been suggested in this discussion. Direct-access fixed-length-record files should not pose an efficiency problem.
I have also been surprised by the poor performance of ifort's BUFFERED='NO', and I am surprised that it is the default.
Perhaps Intel could check whether there is a clash between the OS disk caching in Win 7 (and Win 8, which I have not tested) and BUFFERED='YES'. Having multiple layers of buffering can be counter-productive.

John

John_Campbell
New Contributor II

I could not get UOPEN to work, but I did test Fortran I/O with and without BUFFERED='YES'.
Someone might like to add the UOPEN option to the attached test and supply their run-time results.
When running Task Manager with this test, the memory usage for disk caching is clearly evident, but it only uses the vacant memory pool.
The attached test was run on Win 7 with 12 GB of memory and a 128 GB SSD.
The elapsed times for the different options are a bit mixed; not as I would have expected.
Writing with BUFFERED='YES' at a 1.6 GB file size appears to be surprisingly slow.
There is a run-time difference between CLOSE and CLOSE (status='delete').

I ran alternatives of:
- with and without BUFFERED='YES'
- 400 MB and 1,600 MB file size
- rewriting an existing file or writing a new file
It would be good to see the UOPEN option's performance. That probably requires a change to the compile-and-run batch file.

If you are getting a problem with caching and 400 MB files, how much memory is installed?

John

ps: I have attached an updated version of the test, to allow easier modification of the UOPEN option, if someone can help.

Bernard
Valued Contributor I

>>>It appears that Windows allows the memory used by buffering to grow to fill the machine so that eventually
>>the system becomes unresponsive.

By buffering do you mean using a cache manager to store in memory recent file I/O operations?

Bernard
Valued Contributor I

For troubleshooting you can use the CacheSet utility. For more advanced usage, a kernel debugger can confirm responsibility for large memory allocations by putting breakpoints on memory-manager allocation routines; one such routine is MmAdjustWorkingSetSize, which is responsible for trimming the working set. I think perfmon counters like AsyncCopyRead and LazyWriter need to be monitored while your Fortran application performs disk I/O operations.
