Intel® C++ Compiler
Community support and assistance for creating C++ code that runs on platforms based on Intel® processors.

Performance deterioration for certain array sizes


Hi Folks,

I don't know if this is the right place to post this issue; if not, please let me know where it would be more appropriate.
I'm having the following problem using icpc 15.0.3, but I don't think it depends on the compiler but rather on the CPU architecture.
Consider the following code in C++:

struct Tuple {
  size_t _a;
  size_t _b;
  size_t _c;
  size_t _d;
  size_t _e;
  size_t _f;
  size_t _g;
  size_t _h;
};

void deref_A(Tuple& aTuple, const size_t& aIdx) {
  aTuple._a = A[aIdx];
}

void deref_AB(Tuple& aTuple, const size_t& aIdx) {
  aTuple._a = A[aIdx];
  aTuple._b = B[aIdx];
}

// ... deref_ABC through deref_ABCDEF follow the same pattern ...

void deref_ABCDEFG(Tuple& aTuple, const size_t& aIdx) {
  aTuple._a = A[aIdx];
  aTuple._b = B[aIdx];
  aTuple._c = C[aIdx];
  aTuple._d = D[aIdx];
  aTuple._e = E[aIdx];
  aTuple._f = F[aIdx];
  aTuple._g = G[aIdx];
}
Note that A, B, C, ..., G are simple arrays (declared globally), filled with integers.

The "deref_*" methods simply assign values from the arrays to the fields of the given struct parameter "aTuple". The first method assigns a single field of the struct, and each subsequent method assigns one more field than the previous one, all the way up to all fields. The "deref_*" methods are called with the index (aIdx) ranging from 0 to the maximum size of the arrays (all arrays have the same size, by the way).

Now, consider the attached graph, which depicts the performance for array sizes starting at 20 million elements (size_t, i.e., 8 bytes each) up to 24 million.

When the arrays contain 21 million integers (size_t), performance degrades for the methods touching at least 5 different arrays (i.e., deref_ABCDE through deref_ABCDEFG), which is why you see peaks in the graph. I'm wondering why this is happening. It occurs only when I test on a server with an Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz, but not with Haswell, i.e., v3. Apparently this is an issue known to Intel that has been resolved in v3, but I don't know what it is or how to improve the code for v2.

I would highly appreciate any hint from your side.

To illustrate the problem even better, consider the second graph, which depicts the running time for three different CPUs. This time the test uses 8 array accesses (i.e., only the method deref_ABCDEFG) for different array sizes. The x-axis shows the array size as a power of 2, i.e., the exponent of 2.
