Intel® C++ Compiler
Community support and assistance for creating C++ code that runs on platforms based on Intel® processors.
7957 Discussions

Change in behavior in OpenMP "collapse"

Armando_Lazaro_Alami

After updating to Intel(R) C++ Compiler XE 14.0.2.176 [IA-32], I found a different treatment of the control variables in collapsed loops. Previously I used Intel C++ 13.x.

With version 13.x the following code worked correctly:

int i,j,k;

#pragma omp parallel for collapse(3)
    for (k=0; k< h->nz;  k++)
    for (j=0; j< h->ny ; j++)
    for (i=0; i< h->nx ; i++)
    {
       Point3DType p3d;

       XYZ(h, i, j, k, &p3d);
       d2pc[(h->nx * h->ny)*k + h->nx * j + i] = kdNNS(&p3d);
    }

For version 14.0.2.176 it needs to be modified into:

#pragma omp parallel for collapse(3) private (k,j,i)
    for (k=0; k< h->nz;  k++)
    for (j=0; j< h->ny ; j++)
    for (i=0; i< h->nx ; i++)
    {
       Point3DType p3d;

       XYZ(h, i, j, k, &p3d);
       d2pc[(h->nx * h->ny)*k + h->nx * j + i] = kdNNS(&p3d);
    }

I had to declare at least j and i as private.

I do not know whether the OpenMP specification makes it mandatory to declare i and j as private. But I thought it important to share my case with others, because the source of the problem was not easy to locate, and there are many examples of "collapse" usage that omit the private declaration for the affected control variables.

 

QIAOMIN_Q_
New Contributor I

Hello Armando,

Regarding 'there are lot of examples of use of "collapse" omitting private declaration for the control variables affected': in this case you could use the OpenMP directive

"#pragma omp threadprivate (i,j)", which specifies a list of globally visible variables that are allocated private to each thread. (Note that threadprivate applies only to variables with static storage duration, i.e. global or static variables.)

Then "private (k,j,i)" need not be added explicitly any more.

 

Thank you.
--
QIAOMIN.Q
Intel Developer Support
Please participate in our redesigned community support web site:

User forums:                   http://software.intel.com/en-us/forums/
