Intel® Fortran Compiler

Application commit size

jansson
Beginner

Hi,

When my console application has just started (using F11 in Visual Studio) on Windows 7, its "Commit Size" as shown in Task Manager is 700 MB and its Memory (Private Working Set) is 1 MB. After doing some calculations this rises to about 20 MB.

I want to reduce the commit size because it prevents customers from starting as many instances as they want simultaneously.

One thing that I imagine would affect this is the linker setting Linker > System: Heap/Stack Reserve/Commit sizes.

But I see no effect from changing it, so I have left it at the default.

I'm out of ideas, how do I proceed?

Regards,

Magnus

9 Replies
Steven_L_Intel1
Employee

Is there an actual problem you have experienced? My advice would be to leave this alone. I don't think there is anything you can do here to lower the memory usage - those settings are used to establish a minimum higher than the default. The only one of those properties worth changing is stack reserve size if you want to make it bigger.

jansson
Beginner

I want to reduce the commit size because it prevents our customers from starting as many instances of the console application as they want. I now see that it has been increasing over the years, but recently a bit too much.

For those like me who are new to "Commit Size" (correct me if I'm wrong!):
It seems to me that the sum of the "Commit Size" of each process, as seen in Windows Task Manager, is plotted in the Resource Monitor as "Commit Charge". If starting a new process makes the "Commit Charge" exceed 100%, you get an out-of-memory error. This will not happen if you have "enough" disk space AND let Windows manage the paging file size.

Question:
I wonder what parts of my application make it use so much "Commit Size". If I knew what increases the Commit Size on Windows, I could work around it or consider reducing some dimensions.

Cheers,

Magnus

Steven_L_Intel1
Employee

The commit charge is the amount of pagefile allocation for a process. What you really want to do is reduce the amount of virtual memory used by your application. Some ways of doing this include linking to shared libraries instead of static libraries, not using large static arrays, and reducing dynamic memory growth. It is unusual in a properly configured Windows system to have a problem with the commit limit. Changing the linker properties won't help you here, as all you can do is increase the pagefile use, not decrease it.
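
As a rough illustration of the "large static arrays" point (a minimal sketch with made-up names, not code from this thread): a large fixed-size module array is charged against the commit limit as soon as the EXE loads, while an ALLOCATABLE array contributes nothing until it is allocated, and then only at the size actually requested.

module work_data_fixed
   ! worst-case fixed size: roughly 80 MB is committed as soon as the process starts
   real(8) :: grid(1000, 1000, 10)
end module work_data_fixed

module work_data_alloc
   ! nothing is committed until the array is actually allocated
   real(8), allocatable :: grid(:, :, :)
end module work_data_alloc

subroutine init_grid(nx, ny, nz)
   use work_data_alloc
   implicit none
   integer, intent(in) :: nx, ny, nz
   allocate (grid(nx, ny, nz))   ! commit only what this run needs
end subroutine init_grid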

jansson
Beginner

Thanks for your advice; I will give your suggestions a try.

Cheers,

Magnus

Steven_L_Intel1
Employee

You might want to use the "dynamic memory growth tracking" feature of Intel Inspector XE. This can help you identify memory leaks and unnecessary allocations.

jimdempseyatthecove
Honored Contributor III

http://msdn.microsoft.com/en-us/library/xd3shwhf.aspx

editbin /HEAP:0x100000,0x1000000 YourFile.exe

The above sets the initial heap reserve size to 1 MB and the initial heap commit to 16 MB (initial swap file size).

You pick your sizes.

Jim Dempsey

jansson
Beginner

Thank you Jim, I tried editbin, but it did not help; I guess that is because the space is actually used by variables in my program. Now I will try to reduce the static data. Since it has this size at the very start, I believe it is not dynamic data.

I managed to list the size of objects in the binary by using Linux and nm.

nm --print-size --size-sort your-program.exe | tail -20

This means I can now try to get rid of the largest ones, or convert them to dynamic allocations.
I did not manage to get the same information with dumpbin on Windows.

Cheers,

Magnus

jimdempseyatthecove
Honored Contributor III

Magnus,

>>This means I can now try to get rid of the largest ones, or convert them to dynamic allocations.

But then, if on program start it allocates the same-sized arrays, you will still have the issue of the number of instances you can run at any one time. On the other hand, if these arrays were worst-case (maximum) sizes, then allocating them at a smaller usable size would reduce the total page file requirement for the number of instances you want to run/load. Note that you can also increase the page file size to much larger than physical memory, which would permit more instances to be loaded... provided fewer of them needed all the reserved or allocated memory. You will have to run some tests to see what happens. At some point you will see a lot of page faults.

Depending on what you really need to do, instead of having a script (batch) file spawn a solution for every problem in a folder (n processes for n problems), it may be better to have the script (batch) file create m subsets of the n problems and then launch m scripts (batch files), each with its own subset, to process the files in that subset sequentially (batch mode). At least that would be a starting point. Alternatively, you could encapsulate your program into a subroutine and then call that subroutine from a parallel DO or WHILE loop, as in the sketch below.
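
A minimal sketch of that last idea (hypothetical names; it assumes the existing program body can be turned into a subroutine that takes a problem file name, and that the project is built with /Qopenmp):

program batch_driver
   implicit none
   integer :: i
   character(len=256) :: problem_files(4)

   problem_files = [character(len=256) :: 'case1.dat', 'case2.dat', 'case3.dat', 'case4.dat']

   ! one process, several threads: the static code and data are committed only once
   !$omp parallel do
   do i = 1, size(problem_files)
      call solve_one_problem(trim(problem_files(i)))
   end do
   !$omp end parallel do

contains

   subroutine solve_one_problem(fname)
      character(len=*), intent(in) :: fname
      ! ... the existing program body goes here, reading its input from fname ...
      print *, 'solving ', fname
   end subroutine solve_one_problem

end program batch_driver

Threads share one address space, so each additional problem then costs only its own allocatable data rather than a full copy of the program's static image.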

Jim Dempsey

John_Campbell
New Contributor II

Allocating arrays at a usable size, based on each run's conditions, should provide a better solution all round, and certainly no worse. This approach could lead to identifying other savings and probably better performance for the user under the conditions that have been identified. Using unnecessarily large arrays does carry a performance penalty.

I can't see any downside in converting arrays to ALLOCATABLE, and the review of data structures could identify further gains.

John 
