Hello,
There are some very impressive memory vs. MPI process plots in the excellent MKL presentation:
https://cerfacs.fr/wp-content/uploads/2016/03/Kalinkin.pdf
but it's a little unclear what the memory requirements actually are: is the original matrix needed on each node? It sounds like it is, based on:
https://software.intel.com/en-us/mkl-developer-reference-fortran-dss-distributed-symmetric-matrix-storage
"The algorithm ensures that the memory required to keep internal data on each MPI process is decreased when the number of MPI processes in a run increases. However, the solver requires that matrix A and some other internal arrays completely fit into the memory of each MPI process."
Any thoughts appreciated, thanks!
Don
Hi Donald,
You are right: in the presentation you mentioned, the initial matrix is stored on the master process. However, in recent releases of MKL we implemented distributed reordering (iparm[1] = 10), which does not assemble the matrix on a single process.
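For illustration, here is a minimal sketch of where that setting goes when calling the C interface, cluster_sparse_solver. The small SPD test matrix, mtype, and the other iparm values are just example assumptions; only the analysis/reordering phase is run, and the matrix is still supplied on the master rank (centralized input).

#include <mpi.h>
#include <stdio.h>
#include "mkl_cluster_sparse_solver.h"

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int comm = MPI_Comm_c2f(MPI_COMM_WORLD);  /* Fortran handle for the solver */

    /* Example 4x4 SPD matrix, upper triangle in CSR with 1-based indexing. */
    MKL_INT n = 4;
    MKL_INT ia[5] = {1, 3, 5, 7, 8};
    MKL_INT ja[7] = {1, 2, 2, 3, 3, 4, 4};
    double  a[7]  = {4.0, -1.0, 4.0, -1.0, 4.0, -1.0, 4.0};

    void   *pt[64]    = {0};   /* internal solver handle, must start zeroed */
    MKL_INT iparm[64] = {0};
    iparm[0]  = 1;   /* do not use solver defaults */
    iparm[1]  = 10;  /* distributed (MPI) nested-dissection reordering */
    iparm[34] = 0;   /* ia/ja use one-based indexing */

    MKL_INT maxfct = 1, mnum = 1, mtype = 2 /* real SPD */, nrhs = 1;
    MKL_INT msglvl = 1, error = 0, perm[4] = {0};
    double  b[4] = {1.0, 1.0, 1.0, 1.0}, x[4] = {0};

    MKL_INT phase = 11;  /* analysis / reordering only */
    cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                          a, ia, ja, perm, &nrhs, iparm, &msglvl,
                          b, x, &comm, &error);
    if (rank == 0) printf("reordering finished, error = %d\n", (int)error);

    phase = -1;  /* release internal memory */
    cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                          a, ia, ja, perm, &nrhs, iparm, &msglvl,
                          b, x, &comm, &error);

    MPI_Finalize();
    return 0;
}

Note that how the input matrix itself is supplied (on the master process only, or distributed across processes) is controlled separately by iparm[39]; iparm[1] = 10 affects how the reordering/symbolic factorization step is carried out across the MPI processes.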
Thanks,
Alex
