The intent of this post is to ask how to handle an array whose actual size is much smaller than its maximum possible size. I am diagonalizing a huge sparse symmetric/Hermitian matrix, stored in row-major upper-triangular format. A priori we do not know exactly how many entries are non-zero, but we do know the maximum possible count if all entries were non-zero. Since an allocated array cannot be partially deallocated, plain allocate and deallocate statements seem inefficient. Is there a simple way to handle such a problem without wasting memory?
2 Replies
Maybe one need not worry too much about memory consumption unless the size of the matrices approaches or exceeds the physical memory of your machine.
If a chunk of memory is allocated but never used, you don't need to worry: such memory remains virtual and is never mapped to physical memory. You can observe this behaviour in the discrepancy between the VIRT and RES columns when monitoring a process with "top". The memory-mapping granularity of the OS is a page, typically 4 KB.
The bad news is that even if you use only a single byte in a page, the whole page will be mapped into physical memory by the OS. So if there are no large contiguous unused regions of memory (e.g. larger than one page), we really do have a problem; in that case memory is indeed "wasted".
The only way around that is to construct the matrix in two stages: first do a sweep to count the elements, then allocate the right size and construct the matrix in a second sweep. It's the classical trade of runtime for memory.
michael
Thanks Michael. I had already thought about the workaround you suggest; it is good, but time-consuming. As for the size of the matrix, it is very large, which is why I am using the Lanczos algorithm. Your point about virtual versus physical memory seems important; I will look into it.
