My main program allocates around 12 arrays (the largest is only about 10x10x241x3x5, although I would like to grow that 241 dimension as high as 600 or so), and then passes them back and forth as arguments to subroutines. Each subroutine is contained in its own module. The dummy arguments of the subroutines are assumed-shape arrays that take their shape from the actual arguments. The subroutines execute in sequence, and the last of them needs to allocate (statically or dynamically, I have tried both) a number of much larger arrays (250x100000). This last one is where I have started running into problems. But I cannot understand why my program uses quite so much memory even before that point. It seems like a lot of memory is allocated at compile time, which is strange to me. I have a 2GB system (32-bit), and the Windows page file has been tinkered with endlessly.
Specific questions: (1) Are assumed-shape arrays allocated when a subroutine begins executing and deallocated when it ends, or are they allocated in some form at compile time and persist throughout the program run? (2) Does the module structure (wrapping each subroutine in a module) make the program allocate a lot of memory at compile time? (3) Is there anything else that is memory-suspect in what I have written so far?
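For reference, here is a minimal sketch of the pattern being described (all names are hypothetical; the array bounds are taken from the description above). An assumed-shape dummy receives a descriptor for the caller's array, so no new storage is created when the subroutine starts:

```fortran
module process_mod
contains
    subroutine process_grid(a)
        ! Assumed-shape dummy: no storage is allocated here; the
        ! subroutine works directly on the caller's array via a descriptor.
        real(8), intent(inout) :: a(:,:,:,:,:)
        a = a * 2.0d0
    end subroutine process_grid
end module process_mod

program main
    use process_mod
    implicit none
    real(8), allocatable :: grid(:,:,:,:,:)
    integer :: istat
    ! Heap storage is created only when this ALLOCATE executes, not at compile time.
    allocate(grid(10,10,241,3,5), stat=istat)
    if (istat /= 0) stop 'allocation failed'
    grid = 1.0d0
    call process_grid(grid)    ! passes a descriptor, not a copy
    deallocate(grid)
end program main
```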
Thanks!!
If you declare arrays in a module (other than ALLOCATABLE or POINTER), they are compile-time allocated.
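For illustration (the module and array names are hypothetical), the distinction looks like this:

```fortran
module work_arrays
    implicit none
    ! Static: storage for this array is reserved in the executable image
    ! and exists for the whole program run.
    real(8) :: fixed(250, 1000)
    ! Allocatable: no storage exists until an ALLOCATE statement executes,
    ! and it can be freed with DEALLOCATE when no longer needed.
    real(8), allocatable :: big(:,:)
end module work_arrays
```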
You may find this article I wrote some years back helpful.
Your article was indeed helpful! I have tried to calculate how much memory my program uses, based on the number of real(8) arrays that it allocates statically/dynamically. I found that for the arrays allocated at the beginning of the program (the ones that are passed back and forth), the total amount of memory needed is a puny 16.5 MB (16,500,000 bytes).
The bottleneck occurs in the last routine, as I mentioned before. There are eight 250x100000 arrays being allocated there (for a total of 1.4GB), plus some 100000-element arrays (the biggest set of these is only 4MB), allocated and deallocated in groups, sequentially. Whether I allocate them statically or dynamically, at some point the program runs out of virtual memory. If I allocate them ALL statically, the program runs out of *physical* memory at the very start. I understand that if I allocate 1.4GB+ at compile time, that leaves little physical memory; given that my computer has to run other things, this is a severe constraint (yes?). But given that my total amount of physical + virtual memory is 4GB (2+2, right?), even if the program needs some memory to run, and given that I try carefully to deallocate as soon as I can, why is 1.4GB such a bottleneck?
Is this about virtual memory fragmentation or something? I don't really understand it, but how do I avoid it? Should I just switch away from Windows ASAP?
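A quick check along these lines makes the failure point visible (sketch only; the array size is taken from the description above). Note that on 32-bit Windows the ~2GB user address space is shared with code, stack, and loaded DLLs and becomes fragmented, so a single 200MB *contiguous* allocation can fail even when the total free memory looks sufficient:

```fortran
program alloc_check
    implicit none
    real(8), allocatable :: a(:,:)
    integer :: istat
    ! 250 * 100000 * 8 bytes = 200,000,000 bytes per array.
    allocate(a(250,100000), stat=istat)
    if (istat /= 0) then
        print *, 'ALLOCATE failed, stat =', istat
    else
        print *, 'allocated 200,000,000 bytes'
        deallocate(a)
    end if
end program alloc_check
```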
I know you are busy, and I thank you for your time!
Irina
Then you'll be able to dynamically allocate very large arrays without bumping into the 2GB wall. Note that you must use dynamic allocation - static allocation is still limited to 2GB.
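As a hedged sketch, a 64-bit build can dynamically allocate beyond 2GB like this (the size below is illustrative only, and requires enough physical + page-file memory on the machine):

```fortran
program big_alloc
    implicit none
    real(8), allocatable :: big(:)
    integer(8) :: n
    integer :: istat
    n = 400000000_8                   ! 400e6 elements * 8 bytes = ~3.2 GB
    allocate(big(n), stat=istat)      ! only possible in a 64-bit process
    if (istat /= 0) stop 'allocation failed'
    big = 0.0d0
    deallocate(big)
end program big_alloc
```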
Final question: I have discovered that my arrays are non-contiguous, but I have also read that the compiler's /O3 option takes care of that automatically. Does it, in fact, do so? (Especially within subroutines that only receive these arrays as arguments, so at compile time the loops operate on arrays whose shapes are not yet known?) And does this drain memory, or does it only cost computation time?
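One way non-contiguity can cost both memory and time, sketched below with hypothetical names: passing a non-contiguous array section to an explicit-shape dummy forces the compiler to create a contiguous temporary (copy-in/copy-out), while an assumed-shape dummy can accept the strided section directly through its descriptor:

```fortran
module contig_demo
contains
    subroutine by_shape(a)            ! assumed-shape: handles strides, no copy needed
        real(8), intent(inout) :: a(:)
        a = a + 1.0d0
    end subroutine by_shape
    subroutine by_size(a, n)          ! explicit-shape: a must be contiguous,
        integer, intent(in) :: n      ! so a temporary copy may be made
        real(8), intent(inout) :: a(n)
        a = a + 1.0d0
    end subroutine by_size
end module contig_demo

program main
    use contig_demo
    implicit none
    real(8) :: m(100,100)
    m = 0.0d0
    call by_shape(m(1,:))             ! row section (stride 100): no copy
    call by_size(m(1,:), 100)         ! same section: copy-in/copy-out temporary
end program main
```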
Thanks a lot.
I would suggest spending a little time attempting to debug your "illegal .dll relocation" on the 64-bit OS. If your DLL has this error, I suspect you might be loading a 32-bit-only version of the DLL, which may not handle the longer pointers on the 64-bit system.
If you give up and would like an idea for how to run your application on a 32-bit OS, there is one potential way.
Intel Fortran has an add-on for multi-processor support on physically separate systems. The technology is called MPI (Message Passing Interface). It is compiled into your application in a semi-seamless way to spread the application across systems. However, of interest to you: you can also run MPI on a single system.
Your 32-bit system is capable of running multiple processes (applications), each with ~1.5GB of virtual memory. Using MPI, you might "easily" distribute the large arrays across several processes on your system.
You can limp along with this technique until you find a solution to the DLL problem. Also note that using MPI you could potentially distribute your application across several desktops at your location to get better performance.
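The idea can be sketched roughly as follows (hedged; it assumes an MPI library is installed and that the number of processes divides the column count evenly). Each process then holds only its own slice of the big 250x100000 workspace, so no single 32-bit process needs the full 1.4GB:

```fortran
program mpi_slices
    use mpi
    implicit none
    integer :: ierr, rank, nprocs, ncols, istat
    real(8), allocatable :: my_slice(:,:)
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    ncols = 100000 / nprocs                  ! assumes nprocs divides 100000
    allocate(my_slice(250, ncols), stat=istat)
    if (istat /= 0) call MPI_Abort(MPI_COMM_WORLD, 1, ierr)
    my_slice = real(rank, 8)                 ! ... each rank works on its own slice ...
    deallocate(my_slice)
    call MPI_Finalize(ierr)
end program mpi_slices
```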
Jim Dempsey
Thanks a lot for your responses. I know about MPI; I have not learned it yet, but that is the plan. As for the .dll bug, it has just mysteriously disappeared! (Well, not entirely mysteriously; I worked on it for a while.) So the 64-bit system is back in operation too.
I appreciate your help; I was able to solve some long-standing issues.
You are welcome.
If your system is dual boot (32-bit and 64-bit), then you may have had an environment variable issue that caused the 32-bit library module to be selected over the 64-bit one. The environment variable issue could be due to one of:
a) an incorrect global initial environment variable setting
b) an incorrect user initial environment setting
c) an incorrect build environment selection (usually a .BAT file that sets the environment variables)
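For case (c), a build session typically selects the target architecture by calling the compiler's environment script with an explicit argument; the exact script name and install path vary by compiler version, so the line below is a hypothetical example only:

```
:: Select the 64-bit (intel64) build environment rather than ia32
:: (path and script name depend on the installed compiler version):
call "C:\Program Files (x86)\Intel\...\ifortvars.bat" intel64
```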
Jim Dempsey
I have VS2008 and Intel Fortran on x64 XP. I am running a program that has many allocatable matrices of size 2000x2000x20. During the run, the application says I do not have enough virtual memory; I made the virtual memory bigger, but the problem persists.
I have 8GB of physical memory, but the program does not use it: while the program runs, the page file grows to 2GB, and then the program prints the error.
I would like to know how I can set up Visual Studio and Intel Fortran to use the physical memory, or how else I can solve the problem.
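As a rough sanity check (sketch, sizes taken from the description above): one 2000x2000x20 real(8) array is 2000*2000*20*8 = 640,000,000 bytes (~610 MB), so only a few of them exhaust a 2GB 32-bit process even on a machine with 8GB of RAM. It is worth verifying that the project is actually built as a 64-bit (x64) target, and testing each ALLOCATE:

```fortran
program check_size
    implicit none
    real(8), allocatable :: a(:,:,:)
    integer :: istat
    ! 2000*2000*20 elements * 8 bytes = 640,000,000 bytes (~610 MB).
    allocate(a(2000,2000,20), stat=istat)
    if (istat /= 0) then
        print *, 'ALLOCATE failed, stat =', istat
    else
        print *, 'allocated ~610 MB'
        deallocate(a)
    end if
end program check_size
```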
Thanks.