I am going to upgrade to a newer version of the Intel C++ compiler (currently 2015), but I run into problems when optimizing large files/functions. In extreme cases the Intel compiler (versions 2017 and 2018) allocates up to 156 GB of RAM and works for about half an hour to compile and optimize a single file. With version 2015 the same file takes 2-3 GB of RAM and several seconds.
I have a ton of such files, and since they are large, rewriting them is not an option. I can't send this to support because I would have to send the whole project (the bug only shows up with large files/functions that include a lot of stuff), and strict intellectual property regulations in my organization prevent me from distributing code.
I wonder: is there some way to turn the individual optimizations included in, say, -O1 on and off, so that I can experiment and see exactly what causes this behavior? Also, is there a way to limit the maximum memory used by the compiler, similar to how one can limit the heap space of a Java program?
I use the following flags:
-O2 -fno-omit-frame-pointer -fPIC -fvisibility=hidden -fvisibility-inlines-hidden -fmath-errno -qopenmp -qoverride-limits -fp-model precise -pthread
I have the same problem when compiling Fortran code. Removing -qoverride-limits does not improve the situation, and neither does switching to -O1.
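To narrow down which optimization phase is responsible, one idea I had is compiling a single problematic file with the compiler's own optimization report at maximum verbosity (-qopt-report is available in these compiler versions); my assumption is that the last phase logged before the blow-up would point at the culprit. Again, big_file.cpp is a placeholder for one of my files:

```shell
# Compile one problematic file with the most verbose optimization
# report enabled for all phases; the report is written to
# big_file.optrpt next to the object file.
icpc -O2 -qopt-report=5 -qopt-report-phase=all -c big_file.cpp
cat big_file.optrpt
```

This still doesn't let me switch individual -O1 optimizations on or off, which is what I'm really after.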