Intel® C++ Compiler
Community support and assistance for creating C++ code that runs on platforms based on Intel® processors.

Unpredictable results in OpenMP code fixed by inlining function

Andrey_Vladimirov
New Contributor III

I have a rather large, complex C code parallelized with OpenMP and built with the Intel C compiler 16.0.1.159. The code produced slightly different results in every run until I applied the fix shown below (not because of data races, as discussed below). I cannot explain why this fix works, and it is not a satisfactory solution anyway. There was also an intermediate fix, which looked equally strange. On top of that, I ran the code through Intel Inspector, and it did not detect any data races. Unfortunately, I could not come up with a minimal reproducer.

Could somebody help with a hypothetical explanation of what I am observing?

The initial implementation was roughly like this, and it produced bad results (different results from one run to another). The real code has many more levels of function nesting than shown, but I think this is a good prototype:

void FuncA(DataType* d) {
  *d = ...;  /* modifies only the object pointed to by d */
}

void FuncB(DataType* d) {
  FuncA(d);
}

void FuncC(DataType** data, int n) {
#pragma omp parallel for
  for (int i = 0; i < n; i++) {
    FuncB(data[i]);  /* each iteration works on its own element */
  }
}

 

Fix #1, shown below, which puts the call to the innermost function in a critical region, worked (the code produced correct results):

void FuncA(DataType* d) {
  *d = ...;
}

void FuncB(DataType* d) {
#pragma omp critical
  {
    FuncA(d);  /* the call to the innermost function is serialized */
  }
}

void FuncC(DataType** data, int n) {
#pragma omp parallel for
  for (int i = 0; i < n; i++) {
    FuncB(data[i]);
  }
}

 

Fix #2, shown below, which puts the entire body of the innermost function in a critical region, did not work; the code still produced different results every run. Question 1: why does this not work when Fix #1 does?

void FuncA(DataType* d) {
#pragma omp critical
  {
    *d = ...;  /* the entire body of the innermost function is serialized */
  }
}

void FuncB(DataType* d) {
  FuncA(d);
}

void FuncC(DataType** data, int n) {
#pragma omp parallel for
  for (int i = 0; i < n; i++) {
    FuncB(data[i]);
  }
}

 

Finally, Fix #3, shown below, which declares the innermost function as inline, kind of worked. The code produced correct results every time at the point where the original code had failed; however, the results fell apart later in the execution:

inline void FuncA(DataType* d) {
  *d = ...;
}

void FuncB(DataType* d) {
  FuncA(d);
}

void FuncC(DataType** data, int n) {
#pragma omp parallel for
  for (int i = 0; i < n; i++) {
    FuncB(data[i]);
  }
}

 

 

Question 2: the last case makes me think that OpenMP perhaps has a limit on the call stack depth. Is that correct?

 

Once again, I am as sure as I can be that there are no memory leaks or data races in FuncA or FuncB. This was verified by eye as well as by Intel Inspector.

Andrey_C_Intel1
Employee

Hi Andrey,

If your "data" array does not have duplicated pointers those can be worked on by different threads (obvious race condition), then the problem could be somewhere in compiler-generated code.  Could you ensure that you checked your code with "correct" mode of Intel Inspector?  It can look at memory errors and threading errors, you need the latter.  And turn on the max level of verbosity (it should have three levels, and can work at the simplest one by default).

Other than that, it is hard to guess what the problem is without a reproducer.

As for the call stack depth, there is no limit as long as the thread's stack size is not exceeded (you should get a crash if it is).
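As a rough, self-contained illustration (again, not your code), deep call chains inside a parallel region are fine as long as each thread's stack can hold all the frames; if worker threads do run out of stack, the standard OMP_STACKSIZE environment variable can be used to enlarge it:

#include <stdio.h>

/* a deliberately deep call chain executed by every OpenMP thread;
   each frame holds a 1 KB local buffer, so 1000 levels use ~1 MB of stack */
double Deep(int depth) {
  volatile char frame[1024];
  frame[0] = 1;
  if (depth == 0)
    return (double)frame[0];
  return (double)frame[0] + Deep(depth - 1);   /* not a tail call */
}

int main(void) {
  double total = 0.0;
#pragma omp parallel for reduction(+:total)
  for (int i = 0; i < 8; i++)
    total += Deep(1000);

  /* if a much deeper chain crashes, enlarge the worker-thread stacks,
     e.g. "export OMP_STACKSIZE=64M" before running the program */
  printf("total = %f\n", total);
  return 0;
}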

Regards,
Andrey

KitturGanesh
Employee

Hi Andrey C, thanks for responding to my request! I thought of asking Andrey for a reproducer as well, since the data array in that block is accessed by multiple threads outside of a critical section, which can result in data races and is not thread safe.

Hi Andrey, I'd appreciate a reproducer, thanks!

_Kittur

 

jimdempseyatthecove
Honored Contributor III

>>If your "data" array does not contain duplicated pointers that could be worked on by different threads (an obvious race condition)

That, or the code inside FuncA is not thread safe. Examples would be one DataType object manipulating a different DataType object (e.g., a force calculation between objects), or the code in (or called by) FuncA having sections that require it not to be reentrant.

In the case of the second (reentrancy) issue, where the problem is not self-evident, move your critical section inside FuncA, first encompassing the entire body (which should work), then move either the top or the bottom of the critical section toward the middle, essentially narrowing the range of code protected by the critical section. Run your verification test until the error appears, then back off whichever end you moved until you can locate which statement or group of statements causes the problem. Note that this is not necessarily as easy to do as it is to describe here.
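A sketch of what that looks like (Stage1/Stage2/Stage3 are placeholders standing in for whatever your FuncA actually does, not your real code):

typedef struct { double value; } DataType;

/* placeholder stages standing in for the real work done inside FuncA */
static void Stage1(DataType* d) { d->value += 1.0; }
static void Stage2(DataType* d) { d->value *= 2.0; }
static void Stage3(DataType* d) { d->value -= 3.0; }

/* step 1: protect the entire body - the verification test should pass */
void FuncA(DataType* d) {
#pragma omp critical
  {
    Stage1(d);
    Stage2(d);
    Stage3(d);
  }
}

/* step 2: move one end of the critical section toward the middle and
   re-run the test; if the error returns after exposing Stage1, then
   Stage1 (or something it calls) is the non-thread-safe part */
void FuncA_step2(DataType* d) {
  Stage1(d);                     /* now outside the critical section */
#pragma omp critical
  {
    Stage2(d);
    Stage3(d);
  }
}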

For your final fix, you should try to rework the code so that a critical section is not required at all (as it interferes with multi-threaded performance).
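For example, if the contended update happens to be an accumulation into one shared object (just a guess at what FuncA might be doing), an OpenMP reduction removes the critical section entirely: each thread accumulates into a private copy, and the partial results are combined once at the end.

typedef struct { double value; } DataType;

/* hypothetical rework: instead of every iteration updating a shared
   object under a critical section, accumulate into a reduction
   variable and update the shared object once after the loop */
void FuncC(DataType** data, DataType* shared, int n) {
  double sum = 0.0;
#pragma omp parallel for reduction(+:sum)
  for (int i = 0; i < n; i++)
    sum += data[i]->value;       /* no critical section needed */

  shared->value += sum;          /* single write after the parallel loop */
}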

Jim Dempsey

Andrey_Vladimirov
New Contributor III

Thank you all for advice!

After many hours of investigation, the problem turned out to be caused by a data race in a place where I had not looked before. The "fixes" listed above were not reliable: apparently they only delayed the occurrence of the data race, and in some runs it still popped up.

Lesson learned: if it looks like a data race, but I can swear that it is not a data race, it likely is a data race.

KitturGanesh
Employee

:-) I agree, and thanks for the update as well, Andrey. -Kittur
