Intel® Moderncode for Parallel Architectures
Support for developing parallel programming applications on Intel® Architecture.

Problem with data sharing when calling a DLL from OpenMP

ollehyas
Beginner
Hi, guys, long time no see.

Recently I have been busy with some numerical computation. In this work I have to call a Fortran function from a dynamic link library (DLL) in my C++ code. The Fortran DLL was built from old FORTRAN 77 code and has many COMMON statements. The Fortran function is called many times in a loop with different input parameters. To speed things up, I parallelized the loop like this:

#pragma omp parallel for schedule(static)
for (int i = 0; i < 1000; i++)
{
    ....
    Fortran_function(parameter_1, parameter_2);
    .....
}
When debugging the code with the Intel Parallel Debugger Extension, I found some data races. But even after fixing those bugs, the parallel results are rather different from the sequential results. In the loop all variables are defined as local, so I suspect the problem comes from the Fortran DLL. Are the COMMON statements in the Fortran DLL causing sharing conflicts? How can I fix this?
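
For reference, here is a minimal hypothetical sketch (the routine and variable names are invented, not the actual DLL source) of the kind of F77 routine I mean; the COMMON block is a single process-wide storage area, so every thread that calls the routine reads and writes the same variables:

C     Invented example only -- not the real DLL source.
C     /WORK/ is one shared storage area for the whole process,
C     so concurrent calls from different threads race on A and TMP.
      SUBROUTINE FORTFN(P1, P2)
      DOUBLE PRECISION P1, P2
      DOUBLE PRECISION A(100), TMP
      COMMON /WORK/ A, TMP
      TMP = P1 + P2
      A(1) = TMP
      P2 = A(1)
      RETURN
      END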
2 Replies
jimdempseyatthecove
Honored Contributor III
If your DLL function uses "local" arrays .AND. the function is not declared RECURSIVE (or compiled with -openmp), then the local arrays are given the SAVE attribute. IOW

real :: vec(3)

is a SAVE'd (shared) array.

Your DLL function has to be compiled either a) with RECURSIVE or b) with the -openmp option (even though it does not use any !$OMP... code).
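
To illustrate (made-up routine names in free-form source, not your actual code), with Intel Fortran's default settings:

! Without RECURSIVE (and without -auto / -openmp) the local array vec
! is given the SAVE attribute: one static copy shared by all threads.
subroutine work_shared(x, y)
  real, intent(in)  :: x
  real, intent(out) :: y
  real :: vec(3)          ! implicitly SAVE'd -> shared, race-prone
  vec = x
  y = sum(vec)
end subroutine work_shared

! With RECURSIVE (or when compiled with -recursive, -auto or -openmp)
! vec becomes an automatic, stack-allocated array, so each thread
! calling the routine gets its own private copy.
recursive subroutine work_private(x, y)
  real, intent(in)  :: x
  real, intent(out) :: y
  real :: vec(3)          ! automatic -> private to each call/thread
  vec = x
  y = sum(vec)
end subroutine work_private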

Jim Dempsey
TimP
Honored Contributor III
Jim's comment is correct for Intel Fortran. A subroutine which is to be active in a parallel region should have the RECURSIVE attribute, or be compiled with a non-default thread-safety option, e.g. -auto, -openmp, .....
A C compiler, by default, makes an array defined inside a parallel region automatic, so it becomes private to each thread. You would run into trouble if a shared array (one defined outside the parallel region, or a static) is updated by more than one thread.