Dear all,
I am running PARDISO to solve sparse symmetric indefinite matrices. Since the matrices may be very large, we want to estimate how much memory PARDISO will use, given the matrix size n and the number of nonzero entries nz. In this post https://software.intel.com/en-us/forums/topic/474289, an estimation method is described: 1024 * max(iparm(15), iparm(16)+iparm(17)) + n*nrhs*32 for in-core mode. There is also an MKL function, mkl_peak_mem_usage(), that reports memory usage information. However, neither gives us this information before PARDISO is executed. I wonder if there is any way to estimate the memory usage from only the matrix size n and the number of nonzeros nz. I think the reordering algorithm may affect memory usage, but I don't know how to analyze it.
Regards,
Gisiu
It will be difficult to predict the memory usage in advance as it depends on the matrix type and sparsity pattern.
Hi Vipin,
What if only a rough upper bound is needed? And if we specify a reordering algorithm (say, nested dissection) and a particular number of threads, is it still hard to make a prediction?
Regards,
Gisiu
Hi Gisiu,
It will still be impossible: in our experiments, the estimated sizes differ drastically from the real usage.
However, it is possible after the reordering step (not in advance, as we mentioned): the estimator is max(iparm(15), iparm(16)+iparm(17)).
Vipin
Hi Vipin,
Thanks for your reply. It seems that predicting memory usage before reordering is impossible. Let me explain what we are trying to do: we have to solve many large matrices, and we want a prediction because we do parallel distributed processing. If we can estimate the memory usage, we can assign an appropriate number of matrices to each computer; otherwise, matrices that need too much memory may crash a machine. Do you have any recommended method for dealing with this kind of problem?
Regards,
Gisiu