Hi!
I'm doing a bundle adjustment, using posv to solve the system and potri for the covariance matrix. I work with very big symmetric matrices (> 20,000 x 20,000), so the calculation takes some time. Bundle adjustment is a refining process: a few iterations followed by a statistical accuracy check form a pass, and there is usually more than one pass per dataset.
I recorded the time taken by several steps (iteration, pass, solving, inversion) and got some strange results. For small datasets (matrices smaller than 10,000), everything is fine; with the MKL library it is very fast. :) With bigger ones, however, the timing is unpredictable, both in computation time (MKL library) and in initialization time (program design). Does the computation time depend on the exact size of the matrix, for example on whether it is a multiple of 2 (an MKL-related "issue")? I read something about this in the User Manual (p. 6-14), but I wasn't sure whether it applies to my problem.
Thanks for your help!
4 Replies
Hello,
For the large datasets, does the system have enough memory to hold the data? If the data is too large and cannot be kept in memory, the OS will swap to disk, which can take a long and unpredictable amount of time.
Thanks,
Chao
Hi Chao Y!
Thanks for your answer. Indeed, the matrix takes more GB than the available RAM. So the size of the matrix being a multiple of 2 is not important?
Have a nice day.
I'm not familiar with the reference about multiples of 2. There is a recommendation about choosing sizes such that each row or column is 16-byte aligned; that should have the most effect on problems much smaller than yours, while the suggestion about fitting in RAM is rightly given for larger problems.
eskell, just for your reference:
See the details about Managing Performance and Memory in the User's Guide, chapter 6: Coding Techniques.
As an example:
To obtain the best performance with Intel MKL, ensure the following data alignment in your source code:
- Align arrays on 16-byte boundaries.
- Make sure leading dimension values (n * element_size) of two-dimensional arrays are divisible by 16.
- For two-dimensional arrays, avoid leading dimension values divisible by 2048.
etc.
--Gennady