Intel® Fortran Compiler

OpenMP with derived types 16.0.0.110

Jonathan_O_
Beginner
1,988 Views

Dear all,

I noticed when upgrading to 16.0.0.110 that suddenly OpenMP stopped executing tasks in parallel.

A bit of background:
I have a program in which I evaluate tasks using !$OMP PARALLEL DO. In version 15.0.4.221 everything works fine and the tasks are evaluated in parallel; I have verified this by going back to the old version. Changing to version 16.0.0.110, however, makes the code run sequentially.

Not sure if this is something that is already known. I tried making a small test example but the test example didn't have the same issue.

I wanted to post this ticket to ask whether this is a known issue or whether I should spend more time building up my test example. As I'm in the final stage of a PhD I don't have a lot of time, so getting the test example ready could take a while.

Apologies for not having a lot of information.

Best regards,
Jonathan

 

9 Replies
jimdempseyatthecove
Honored Contributor III

Try creating a new project using OpenMP to see if it has the same issues.

If it does, then check the environment variables and compiler options to ensure that it is not being instructed to run serially.

If the simple program runs in parallel, insert at the start of the program

!$OMP PARALLEL
write(*,*) omp_get_thread_num()
!$OMP END PARALLEL

If that shows parallelism but your !$OMP PARALLEL DO does not, then you may have a situation where the !$OMP PARALLEL DO is being issued inside a parallel region with nested parallelism turned off. *** This is not a statement that you need to enable nested parallelism; rather, you need to examine the code to determine the proper course of action. It may be that you need a !$OMP DO, or something entirely different.
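As a quick probe, something along these lines (a minimal sketch, assuming omp_lib is available), placed just before the !$OMP PARALLEL DO, will tell you whether you are already inside a parallel region at that point:

```fortran
program nest_probe
    use omp_lib
    implicit none
    ! Report the nesting state at this point in the program.
    ! If "in parallel" is true here, the PARALLEL DO below would be
    ! a nested region and, with nesting disabled, runs on one thread.
    write(*,*) 'in parallel     =', omp_in_parallel()
    write(*,*) 'nesting level   =', omp_get_level()
    write(*,*) 'threads on next =', omp_get_max_threads()
end program nest_probe
```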

Jim Dempsey

Jonathan_O_
Beginner

With the code snippet below in the beginning of my code I get completely different behaviour in the two versions.

As you can see in the attached pictures, v15 carries out the do loop in parallel while v16 carries it out sequentially. There are no OpenMP statements prior to this in my code. However, if I comment out the remainder of my code, the loop works in v16. I do have OpenMP statements further down the line in the code. Can they really interfere?

    !$OMP PARALLEL
    write(*,*) omp_get_thread_num()
    !$OMP END PARALLEL
    
    !$OMP PARALLEL DO
    do i = 1,4
        WRITE(*,*) 'start',i
        call sleep(1)
        WRITE(*,*) 'end',i
    end do
    !$OMP END PARALLEL DO
    
    read(*,*)
    STOP

 

Kevin_D_Intel
Employee

You appear to be showing that they do interfere, which is surprising. I note you mentioned your time is limited at present, so when convenient, perhaps you can further isolate what in your code triggers this behavior by adding back portions of your code alongside the snippet you showed and re-testing.

I will inquire with Developers about any possible previous reports or thoughts on what you are observing.

jimdempseyatthecove
Honored Contributor III

There appears to be an implicit barrier (or critical section).

In front of your !$OMP PARALLEL DO insert

write(*,*) "in parallel =", omp_in_parallel()

What does it say?

Second test, out of curiosity, set the loop count for the do i loop to a count equal to or larger than the number of available threads.
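In case it helps, here is a generic sketch (not your code) of why the loop count matters: with the default static schedule, a loop shorter than the team leaves some threads with no iterations at all, which can make the output look more serial than it really is.

```fortran
program short_loop
    use omp_lib
    implicit none
    integer :: i
    ! With, say, 8 available threads and only 4 iterations, the
    ! default (static) schedule hands one iteration to each of the
    ! first four threads and leaves the rest of the team idle.
    !$omp parallel do
    do i = 1, 4
        write(*,*) 'iteration', i, 'ran on thread', omp_get_thread_num()
    end do
    !$omp end parallel do
end program short_loop
```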

Note, what you have should work.

Can you post your list of OMP* and KMP environment variables?

Jim Dempsey

Jonathan_O_
Beginner

Hi Jim,

The code snippet below produced the attached output.

I don't have any OMP* or KMP* environment variables set (I checked in cmd by typing "SET OMP" and "SET KMP", which lists any variables starting with those prefixes, so I believe that's sufficient).

You guys have probably figured out by now that I'm not a computer scientist :) 

I'll try to comment out parts of my code to figure out which part is causing this and get back to you.

Thanks,
Jonathan

 

    !$OMP PARALLEL
    write(*,*) omp_get_thread_num()
    !$OMP END PARALLEL
    
    write(*,*) "in parallel =", omp_in_parallel()
    
    !$OMP PARALLEL DO
    do i = 1,10
        WRITE(*,*) 'start',i
        call sleep(1)
        WRITE(*,*) 'end',i
    end do
    !$OMP END PARALLEL DO

 

jimdempseyatthecove
Honored Contributor III

This is quite odd. The first parallel region shows the threads starting up in somewhat random order (as usual), while the second parallel region runs sequentially in thread order. Let's probe once more to see if we can find a (temporary) workaround. Split the PARALLEL and the DO:

!$OMP PARALLEL
write(*,*) omp_get_thread_num()
!$OMP DO
do i = 1,10
    WRITE(*,*) 'start',i
    call sleep(1)
    WRITE(*,*) 'end',i
end do
!$OMP END DO
!$OMP END PARALLEL

Jim Dempsey

jimdempseyatthecove
Honored Contributor III

I seem to recall an issue with the V16 beta where the WRITE statement would enter a critical section but then not leave it upon completion of the statement. Try this to see if that is the case:

REAL(8) :: myStart(0:7), myEnd(0:7)
...
!$OMP PARALLEL DO
do i = 1,8
    myStart(omp_get_thread_num()) = omp_get_wtime()
    call sleep(1)
    myEnd(omp_get_thread_num()) = omp_get_wtime()
end do
!$OMP END PARALLEL DO
do i = 0,7
    write(*,*) i,myStart(i), myEnd(i)
end do

The above moves the WRITE statements out of the parallel loop, so any WRITE-related locking cannot serialize the iterations.

Jim Dempsey

Jonathan_O_
Beginner

Hmm, actually, declaring local arrays in that manner throws an access violation. I only had allocatable arrays in this routine before.

"Unhandled exception at 0x00007FF6639326C7 in MAM_x64_non-commercial.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF."

It only happens for arrays, i.e. "real(8) :: test" doesn't throw an error but "real(8) :: test(1)" does.

Jonathan_O_
Beginner

Actually, this seems to be because I was using the compiler option Qinit:snan (and Qinit:arrays).

Attached is the output from your code Jim.

Thanks,

Jon
