I was having a look at the HPCG benchmark that is part of intel-oneapi-mkl-common-devel-2022.2.1-2022.2.1-16993.noarch.rpm.
As far as I know, we currently need to experiment with different problem sizes (NX, NY, NZ) to find one that fits in memory.
Is there a formula to derive a problem size for XHPCG that theoretically consumes x% of memory, or that at least helps us estimate the amount of memory that will be used?
I came across a post, https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/HPCG-memory-allocation/m-p/1065740, where the following is mentioned:
896L*nx*ny*nz + 1024*1024
Can I interpret it as:
memory in bytes = (896 * 104 * 104 * 104) + (1024 * 1024)
for a local problem size of nx = ny = nz = 104?
Is this formula still valid for the latest HPCG that ships with oneAPI/oneMKL? hpcg.cpp is not present, and I am unable to find this code in hpcg.hpp.
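In case it is useful, here is a quick Python sketch of how that estimate could be inverted to pick a cubic local problem size for a target memory fraction. It assumes the 896*nx*ny*nz + 1024*1024 bytes-per-rank figure from the linked post, which I have not verified against the current xhpcg, and it rounds down to a multiple of 8 since, as far as I understand, HPCG coarsens the grid three times and therefore wants the local dimensions divisible by 8:

# Sketch only: assumes bytes_per_rank ~= 896*nx*ny*nz + 1024*1024 (from the linked post);
# not verified against the xhpcg shipped with the current oneMKL.
def hpcg_bytes(nx, ny, nz):
    return 896 * nx * ny * nz + 1024 * 1024

def suggest_cubic_dim(mem_bytes_per_rank, fraction=0.8):
    budget = fraction * mem_bytes_per_rank - 1024 * 1024
    n = (budget / 896.0) ** (1.0 / 3.0)          # cube root, since nx = ny = nz
    return max(8, (int(n) // 8) * 8)             # keep local dims divisible by 8

n = suggest_cubic_dim(1 * 1024 * 1024 * 1024)    # 1 GiB per rank, ~80% target
print(n, hpcg_bytes(n, n, n) / 2**30)            # prints 96 and roughly 0.74

By that same estimate, nx = ny = nz = 104 works out to 896 * 104^3 + 1 MiB = 1,008,926,720 bytes, i.e. about 0.94 GiB per rank.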
I was looking for something like what we already have for HPL, where, say, if a system has 1 GB of memory:
system_memory=1G
system_memory_bytes=1 * 1024 * 1024 * 1024
dp_elements = system_memory_bytes/8
PSize=sqrt(dp_elements)
and to target roughly 80% of memory, dp_elements is scaled by 0.8 before taking the square root:
PSize = sqrt(0.8 * dp_elements)
(scaling PSize itself by 0.8 would only use about 64% of memory, since usage grows with PSize squared)
Further minor adjustments can then be made to round PSize down to a multiple of the block size (NB), although actual usage may vary depending on factors such as the number of MPI ranks launched.
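For comparison, this is roughly what that HPL rule of thumb looks like as runnable Python; NB = 232 and the 80% target are just illustrative values, not anything mandated by HPL:

import math

# Minimal sketch of the HPL sizing rule of thumb described above.
def hpl_n(system_memory_bytes, mem_fraction=0.8, nb=232):
    dp_elements = mem_fraction * system_memory_bytes / 8   # doubles that fit in the budget
    n = int(math.sqrt(dp_elements))                        # N x N matrix of doubles
    return (n // nb) * nb                                  # round down to a multiple of NB

print(hpl_n(1 * 1024 * 1024 * 1024))    # 1 GiB of memory -> N = 10208 with NB = 232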