Intel® C++ Compiler
Community support and assistance for creating C++ code that runs on platforms based on Intel® processors.

-O0 -openmp generates a segfault

meriko
Beginner
Hello,
I am trying to run a simple OpenMP matrix multiplication code with 200x200 arrays.
I found that when the code is compiled with optimizations disabled, as in "icc -openmp -O0 matmul.c", a segfault occurs at run time. However, when compiled simply as "icc -openmp matmul.c", the code works properly.
I would like to disable all optimizations when running the OpenMP code; that is why I need -O0.
Can anyone help with the problem?
Thank you,
6 Replies
Brandon_H_Intel
Employee

Is it a stack overflow segfault? If it is, you can use "ulimit -s 999999999" or some other large number to increase the stack size.

If not, what's the seg fault and where's it happening? Can you get a trace from gdb?
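
As an aside, if you want to check the limit from code rather than the shell, a minimal sketch using POSIX getrlimit() (illustrative, not from this thread) is below. Note also that with Intel's OpenMP runtime the worker threads' stacks are sized separately (the KMP_STACKSIZE/OMP_STACKSIZE environment variables), so "ulimit -s" mainly governs the initial thread.

    /* Minimal sketch (illustrative): query the process stack limit,
       the programmatic counterpart of "ulimit -s". */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY)
                printf("stack limit: unlimited\n");
            else
                printf("stack limit: %llu KB\n",
                       (unsigned long long)(rl.rlim_cur / 1024));
        }
        return 0;
    }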

meriko
Beginner
I doubt that it is a stack-overflow segfault, because the executable runs correctly without the -O0 flag.
So below is the trace from gdb:
[New Thread 0x7f1194d79710 (LWP 9376)]
[New Thread 0x7f1193f70710 (LWP 9377)]
[New Thread 0x7f1192f6f710 (LWP 9378)]
[New Thread 0x7f1191f6e710 (LWP 9379)]
[New Thread 0x7f118bfff710 (LWP 9380)]
[New Thread 0x7f118affe710 (LWP 9381)]
[New Thread 0x7f1189ffd710 (LWP 9382)]
[New Thread 0x7f1188ffc710 (LWP 9383)]
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f1189ffd710 (LWP 9382)]
0x0000000000400ef8 in L_main_68__par_loop0_2_27 () at matmul_p.c:74
74          c[i][j] += a[i][k] * b[k][j];
(gdb) bt
#0 0x0000000000400ef8 in L_main_68__par_loop0_2_27 () at matmul_p.c:74
#1 0x00007f1194c323d3 in __kmp_invoke_microtask ()
from /opt/intel/Compiler/11.1/072/lib/intel64/libiomp5.so
#2 0x00007f1194c0f796 in __kmpc_invoke_task_func ()
from /opt/intel/Compiler/11.1/072/lib/intel64/libiomp5.so
#3 0x00007f1194c108e3 in __kmp_launch_thread ()
from /opt/intel/Compiler/11.1/072/lib/intel64/libiomp5.so
#4 0x00007f1194c38347 in ?? ()
from /opt/intel/Compiler/11.1/072/lib/intel64/libiomp5.so
#5 0x00007f11944dba4f in start_thread () from /lib64/libpthread.so.0
#6 0x00007f119424582d in clone () from /lib64/libc.so.6
#7 0x0000000000000000 in ?? ()
Thanks,
TimP
Honored Contributor III
It looks like you need to at least link with -traceback, if not build with -g, to get a useful traceback.
One thing that sometimes happens is that optimization has an effect similar to an OpenMP private specification for a variable. Of course, it is technically better to declare all the necessary privates rather than depend on optimization.
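To make that concrete, here is a minimal sketch (hypothetical code, not your program) where a shared inner index races across threads. Built with something like "icc -openmp -std=c99 -O0", the count typically comes out wrong, while optimization may mask the bug by keeping i in a per-thread register:

    /* Hypothetical illustration: i is shared, so threads race on the
       inner loop counter and iterations are skipped or repeated. */
    #include <stdio.h>
    #define N 100

    int main(void)
    {
        int i;                        /* shared by default in the parallel region */
        long hits = 0;
        #pragma omp parallel for reduction(+:hits)
        for (int j = 0; j < N; j++)
            for (i = 0; i < N; i++)   /* unsynchronized updates: a data race */
                hits++;
        printf("expected %d, got %ld\n", N * N, hits);
        return 0;
    }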
meriko
Beginner
Actually I compiled with -g to get the above backtrace.
Perhaps someone could try running the matrix multiply code with the given flags to reproduce the problem.
Below is the code:
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
#define NRA 200
int main (int argc, char *argv[])
{
  int tid, nthreads, i, j, k, chunk;
  double a[NRA][NRA], /* matrix A to be multiplied */
         b[NRA][NRA], /* matrix B to be multiplied */
         c[NRA][NRA]; /* result matrix C */
  /*** Initialize matrices ***/
  for (i=0; i<NRA; i++)
    for (j=0; j<NRA; j++) {
      a[i][j] = i+j;
      b[i][j] = i*j;
      c[i][j] = 0;
    }
  /*** Do matrix multiply sharing iterations on outer loop ***/
  #pragma omp parallel for
  for (j=0; j<NRA; j++)
    for (i=0; i<NRA; i++)
      for (k=0; k<NRA; k++)
      {
        c[i][j] += a[i][k] * b[k][j];
      }
  printf("The %f \n", c[3][2]);
  printf("******************************************************\n");
  return 0;
}
Thanks,
Michael_K_Intel2
Employee
Hi,

In your code you should make i and k private variables; otherwise i and k will be shared across the different threads and your inner loops will not execute correctly. With optimization the compiler hides this from you, because it keeps i and k in registers.

For j, you don't need to do anything, since OpenMP specifies that the loop counter of the loop associated with the "parallel for" construct is automatically privatized.

So your code snippet should look like this:

#pragma omp parallel for private(i, k)
for (j=0; j<NRA; j++)
  for (i=0; i<NRA; i++)
    for (k=0; k<NRA; k++)
    {
      c[i][j] += a[i][k] * b[k][j];
    }

Cheers,
-michael
TimP
Honored Contributor III
OpenMP specifies that the outer for loop index defaults to private, but the inner ones must be given local scope explicitly, as Michael showed you. If you are able to use C99 or C++, you have the alternative:
#pragma omp parallel for
for (j=0; j<NRA; j++)
  for (int i=0; i<NRA; i++)
    for (int k=0; k<NRA; k++)
Fortran OpenMP rules make all the inner loop indices private automatically. The compiler optimizer may happen to treat C for loops in a similar way without tripping over the programming error, but evidently you have no assurance of the detailed behavior in such a case.
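Putting the fixes together, a complete corrected sketch (assuming C99, e.g. "icc -openmp -std=c99 -O0 matmul.c"; making the arrays static is an optional extra change to sidestep any stack-size concerns) might look like:

    #include <stdio.h>
    #include <omp.h>
    #define NRA 200

    int main(void)
    {
        /* static avoids placing ~1 MB of arrays on the stack */
        static double a[NRA][NRA], b[NRA][NRA], c[NRA][NRA];

        /* Initialize matrices */
        for (int i = 0; i < NRA; i++)
            for (int j = 0; j < NRA; j++) {
                a[i][j] = i + j;
                b[i][j] = i * j;
                c[i][j] = 0;
            }

        /* Matrix multiply; i and k are loop-scoped, hence private per thread */
        #pragma omp parallel for
        for (int j = 0; j < NRA; j++)
            for (int i = 0; i < NRA; i++)
                for (int k = 0; k < NRA; k++)
                    c[i][j] += a[i][k] * b[k][j];

        printf("c[3][2] = %f\n", c[3][2]);
        return 0;
    }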