I am trying to run a simple OpenMP matrix multiplication code with array sizes 200*200.
I found that when the code is compiled with optimizations disabled, e.g. "icc -openmp -O0 matmul.c", a segfault occurs at run time. However, when compiled simply as "icc -openmp matmul.c", the code works properly.
I would like to disable all optimizations when running the OpenMP code, which is why I need -O0.
It looks like you need at least to link with -traceback, if not build with -g, to get a useful traceback. One thing that sometimes happens is that optimization has a similar effect as OpenMP private specification of a variable. Of course, technically, it's better to declare all necessary privates rather than depend on optimization.
In your code you should make i and k private variables; otherwise i and k will be shared across the different threads and your inner loops will not execute correctly. With optimization the compiler hides this from you, because it keeps i and k in registers.
For j, you don't need to do anything, since OpenMP specifies that the loop counter of the loop associated with the "parallel for" is automatically made private.
OpenMP specifies that the outer for loop index defaults to private, but the inner ones must be given local scope explicitly, as Michael showed you. If you are able to use C99 or C++, you have the alternative:
    #pragma omp parallel for
    for (j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++)
                c[i][j] += a[i][k] * b[k][j];

(The loop bounds and body above are reconstructed; the original post was truncated at each "<".) Fortran OpenMP rules make all the inner loop indices private automatically. The compiler optimizer may happen to treat C for loops in a similar way without barfing over the programming error. Evidently, you have no assurance of the detailed behavior in such a case.