Intel® C++ Compiler
Community support and assistance for creating C++ code that runs on platforms based on Intel® processors.

Crashes and large memory usage when compiling large arrays of class data.

jcoffland
Beginner
I've noticed serious problems when compiling fairly large arrays containing class data with the Intel compiler v11.1 on both Windows and Linux.

I've attached a fairly simple example which constructs an array of the following simple class:
[cpp]class A {
  int x;
public:
  A(int x) : x(x) {}
}; [/cpp]
The array is constructed like this:
[cpp]A data[] = {A(1),A(2),A(3),A(4),A(5)...};[/cpp]

The attached example has 200,000 entries and causes the compiler to crash after using over half a gigabyte of memory. I compiled with this command:
[bash]icpc -c test.cpp[/bash]
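For reference, a test file equivalent to the attachment can be generated with something like this (the generator itself is only a sketch; the entry count and output name match the example described above):
[cpp]// Writes test.cpp containing class A and a 200,000-entry array of A.
#include <fstream>

int main() {
  const int N = 200000;              // number of array entries
  std::ofstream out("test.cpp");

  // Emit the class definition.
  out << "class A {\n"
         "  int x;\n"
         "public:\n"
         "  A(int x) : x(x) {}\n"
         "};\n\n";

  // Emit the array initializer A(1), A(2), ..., A(N).
  out << "A data[] = {";
  for (int i = 1; i <= N; ++i) {
    if (i > 1) out << ",";
    out << "A(" << i << ")";
  }
  out << "};\n";

  return 0;
}[/cpp]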
This is a simplified example. I've experienced this problem several times in real-world code. I've previously reported a similar problem involving an array of std::string and didn't get much response.

See: http://software.intel.com/en-us/forums/showthread.php?t=69665

It does not take 200,000 array entries to trigger the problem. I've noticed serious slowdowns with only a few hundred array entries and a larger class structure.

Additionally, I've noticed huge memory usage with very large character arrays: several GB of memory for nested character arrays totaling only a few MB. I worked around this by splitting the data into several smaller files.
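For illustration, the split looks roughly like this (file and symbol names are made up; only the general shape matters):
[cpp]// data_part1.cpp -- first chunk of the character data, compiled on its own
extern const char part1[] = { 'a', 'b', 'c' /* ...more data... */ };

// data_part2.cpp -- second chunk, and so on for the remaining parts
// extern const char part2[] = { ... };

// consumer.cpp -- uses the chunks without pulling their initializers in
// extern const char part1[];
// extern const char part2[];[/cpp]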

Please escalate this issue as it has caused serious problems in my application development using the Intel compiler on at least three different occasions.
3 Replies
aazue
New Contributor I
Hi
Since your previous thread did not receive an answer that resolved it, and since you have already partially located the problem yourself with this remark:

(I've noticed serious slowdowns with a few hundred array entries and a larger class structure.)

I think it would be judicious to divide the task into sequential steps, if that is possible for you. The memory footprint you describe is too large for this type of task (Intel compiler).
Commonly, a program that demands such a large share of system memory would be rejected by quality control anyway.
I know and understand that it is not always easy to divide the data, but sometimes there is no other choice.
The optimizing compiler can also inflate memory use beyond what the standard case would need; when a large size is required by default, that size is added on top (for parallel asynchronous tasks), and the result can be even more catastrophic. If dividing the task is difficult for you, it will be even more difficult for the compiler to do automatically (much like all the possible vectorization conflicts).

Regards

(Strange rendering here when using the Opera browser?? Optimized, I think...)
levicki
Valued Contributor I
If you have not already done so, I suggest you register at premier.intel.com for support and submit this compiler bug report along with the test case you posted here.

Jennifer, Brandon, or even I could do that on your behalf, but if you do it yourself then you will get an answer faster, and quite possibly a hotfixed compiler executable before the fix gets rolled into an official update.
Dale_S_Intel
Employee
Sorry for the delay in responding to this issue; I missed it earlier. If your original file is really just a big array initialization, you might want to try -O0. Both gcc and icc take an exceedingly long time on this file at -O2.
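For example, starting from the command you posted, something like:
[bash]icpc -O0 -c test.cpp[/bash]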
In the meantime, I'm investigating to see if there's some potential for improvement.
Dale