I have a Fortran DLL, called from a C++ application, which has been running very reliably for a long time. Recently we refactored the C++ application and have found that, in one very particular (and reproducible) set of circumstances, a call to a Fortran routine which allocates a ~120MB block of memory [using "allocate (..., STAT = alloc_err)"] fails with error code 41, 'FOR$IOS_INSVIRMEM', indicating:
"The Intel Fortran RTL attempted to exceed its available virtual memory while dynamically allocating space. To overcome this problem, investigate increasing the data limit. Before you try to run this program again, wait until the new system resources take effect."
The error does not occur if I reduce the size of the block requested.
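For reference, the call follows the usual pattern, roughly like this (a simplified sketch; the array name and exact size are illustrative, not the real code):

```fortran
! Simplified sketch of the failing allocation (name and size are illustrative).
real(8), allocatable :: work(:)
integer :: alloc_err

allocate (work(15000000), STAT = alloc_err)           ! ~120MB of 8-byte reals
if (alloc_err /= 0) then
   write (*,*) 'ALLOCATE failed, STAT = ', alloc_err  ! 41 = FOR$IOS_INSVIRMEM
end if
```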
On the face of it, what's happening should be bloomin' obvious: I've run out of memory. However:
- I can check the memory footprint of the application when this happens, and it's never particularly large; always <1GB, whereas when it's working properly the same application regularly consumes up to about 1.5GB without complaining.
- my computer runs 64-bit Windows 7 and has 8GB of physical memory; according to Task Manager, when the failure occurs the total physical memory usage is never more than ~55%.
- the same Fortran code called from the same C++ code compiled with different options runs flawlessly. Basically, we can compile the C++ application as either a single executable or as a lightweight executable plus a DLL, with the Fortran code being called from the DLL. The memory error occurs in the DLL configuration, but not when the C++ code is compiled as a single executable. The memory footprint appears to be very similar in each case.
So, my question is whether there is anything else that could give rise to this error, apart from the obvious.
Alternatively, is the operating system likely to be allocating the Fortran code a different amount of memory when called from a DLL, compared to when called from a .exe file? If so, how does one control this?
Thanks,
Stephen.
You can get error 41 if there isn't an available contiguous block of memory, even if there are enough free bytes in total.
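A quick way to check for that is to probe for the largest single block that will still allocate, for example by halving the request until it succeeds. A rough diagnostic sketch (names and starting size are illustrative):

```fortran
! Rough diagnostic: find approximately the largest single block that can
! still be allocated by halving the request until ALLOCATE succeeds.
integer(8) :: n
integer :: stat
real(8), allocatable :: probe(:)

n = 256_8 * 1024 * 1024 / 8                  ! start at ~256MB of 8-byte reals
do while (n > 0)
   allocate (probe(n), STAT = stat)
   if (stat == 0) then
      write (*,*) 'largest successful block ~', n*8_8/(1024*1024), 'MB'
      deallocate (probe)
      exit
   end if
   n = n / 2
end do
```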
A crock - perhaps. Solves your problem - likely.
Jim Dempsey
To test Steve's theory that maybe the problem was memory fragmentation, I broke my single ~120MB allocation statement (which actually attempted to allocate 24 different arrays, each ~5MB in size) into its constituent parts and checked the value of STAT after each one. The first dozen or so worked fine, but about halfway through one of them returned error code 41 again and the whole thing folded.
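In outline the test looked like this (a sketch only; the real code allocates 24 separately named arrays rather than looping over a derived type):

```fortran
! Sketch of the experiment: allocate the 24 ~5MB arrays one at a time and
! report which one first fails.  Names and sizes are illustrative.
integer, parameter :: narr = 24
integer, parameter :: nelem = 650000         ! ~5MB of 8-byte reals per array
type buf
   real(8), allocatable :: a(:)
end type buf
type(buf) :: arrays(narr)
integer :: i, stat

do i = 1, narr
   allocate (arrays(i)%a(nelem), STAT = stat)
   if (stat /= 0) then
      write (*,*) 'allocation', i, 'failed with STAT =', stat   ! 41 seen here
      exit
   end if
end do
```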
So the problem definitely looks as though it's something to do with the total amount of memory allocated, rather than the way it is spread out.
I did another experiment, namely trying to allocate a buffer of a similar size in the C++ code, immediately before the call to the Fortran was made. This also failed. Therefore, the picture that's emerging is that it isn't a problem with the Fortran runtime itself, but a genuine shortage of memory available to the application, even though there should be plenty of system resources available.
It looks like I'll have to seek the solution elsewhere.
Many thanks,
Stephen.
This may seem like a stupid question to ask....
You are on Windows 7 x64, so presumably you are using MS Visual Studio.
When you create a project in MSVS the default is a 32-bit project, which will compile and run on your x64 system. For 64-bit you must create a new configuration (called x64) from the 32-bit version. Have you done this?
Your earliest post did not state if you were building a 32-bit app or a 64-bit app (only that you were on a 64-bit platform).
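If in doubt, a one-line check inside the Fortran code will confirm which address width the DLL was actually built with (a minimal sketch):

```fortran
! Minimal check: prints 32 for a 32-bit build and 64 for a 64-bit build.
program addr_width
   use iso_c_binding, only: c_intptr_t
   implicit none
   print *, 'pointer width (bits):', bit_size(0_c_intptr_t)
end program addr_width
```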
If you are building a 64-bit app, then browse your system properties for the virtual memory settings. Maybe your paging file is too small. Note, if your system and paging file are on a tiny SSD then you might run into this problem.
Jim Dempsey
Actually, a very perceptive question...
As it happens the C++ application is compiled using a makefile, currently in 32-bit mode. The reason for this is that the refactoring has been done by a consultant who runs the application on his PC running 32-bit Windows 7, and I've just been using his makefile to compile it on mine. On his platform, with a total of 2GB of physical memory available and without using the /LARGEADDRESSAWARE switch, it runs without any problems, so I still have a really hard time believing that >2GB of address space is really necessary.
There's obviously something subtle going on here that I don't fully understand, but at least for the time being I can run the application without it failing.
Stephen.
If so, do you have the CD?
If so, do you have Windows 7 "Pro"?
If so, do you know you can "boot" a virtual computer?
If so, you can (should be able to) load Windows XP 32-bit into the virtual machine.
Then load your compiler and other utilities and updates.
Now you have replicated a virtual environment of your consultant's.
(Or dust off your old development system.)
This may be related to how Windows 7 is partitioning the 4GB virtual address space,
e.g. half to the O/S, a chunk for the WoW (don't know which half), a chunk for the DLLs, then what's left for your application (plus static libraries).
I just thought of something that may get you through this pinch without going through the above hoop jumping (which you might want to do anyway).
Do what you can to reduce the size of your DLL. Compile it targeting your machine only, and remove Interprocedural Optimizations (excessive inlining). For the portions you know are error-free, strip out the debug symbols. Depending on code bloat (due to excessive inlining and debug symbols) you might halve the footprint of your DLL. If you can get it down to 60MB you might get up and running.
Also check for options that specify excessive stack size (linker option for main thread and KMP_... option for OpenMP if applicable). With any luck it will be a tuning issue and you will get up and running asap.
Jim Dempsey
Using the /LARGEADDRESSAWARE switch I can now run the application on my computer anyway, so I don't need to spend any more time on the problem unless it comes back to bite me. Just for completeness, however, I installed the problematic version of the software (compiled without the switch) onto my old laptop, which runs Vista on a 32-bit Core Duo chip along with 3GB of physical memory, and I found that it ran fine; so this issue does seem to be restricted to my particular computer, or at least to 64-bit platforms.
At some point I'll recompile the code in native 64-bit mode to see if that makes any difference, but it's a low priority for now.
Stephen.
Native 64-bit will be the way to go; however, you may have some transition problems. Note, I have not personally experienced transition problems in the program I've developed and used. However, when installing some "generic" benchmark programs which rely heavily on 3rd-party libraries, I did run into problems. Most of the problems were related to programmers using "unsigned int" for pointers.
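On the Fortran side of a mixed C++/Fortran application, the equivalent trap is holding an address in a default 4-byte integer when it crosses the language boundary. A hedged sketch of the safe pattern (the routine name and arguments are purely illustrative, not from this thread):

```fortran
! Sketch only: receive an address from the C++ caller in a pointer-sized
! integer so the same interface is correct for 32-bit and 64-bit builds.
subroutine take_buffer(buf_addr, nbytes) bind(C, name='take_buffer')
   use iso_c_binding, only: c_intptr_t, c_size_t
   implicit none
   integer(c_intptr_t), value :: buf_addr   ! not INTEGER(4): truncates on x64
   integer(c_size_t),   value :: nbytes
   ! ... convert to a TYPE(C_PTR) / Fortran pointer via TRANSFER and
   !     C_F_POINTER if the buffer needs to be accessed from Fortran ...
end subroutine take_buffer
```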
x64 code will tend to require more memory than 32-bit code. RAM is quite cheap now, and I think it is hard to find a development-class system with under 8GB these days. Execution size may be an issue if your program is sold or used on platforms outside your control, so you may still have a requirement to shoehorn your code into a smaller footprint. (Dust off an old copy of Borland Turbo C++, which had capabilities for overlays.)
Jim Dempsey
