The following case leads to a segmentation fault at run time with ifort. I think allocatable components may only be supported as of OpenMP 4.0 (?), in which case it is perhaps too early to expect them to be widely implemented. But I'm curious whether this is a case that is supposed to work, and whether there is any specific plan to implement it soon. gfortran and ifort are the only recent compilers that seem to have an issue with it; pgfortran, nagfor, and xlf2003 all do what you'd expect, which is to say, the same thing as if you were using a bare array instead of a derived-type component.
use omp_lib, only: omp_get_thread_num
implicit none
type :: foo
   integer, allocatable :: a(:)
end type foo
type(foo) :: bar
integer :: i, sum_arr(5)

!$omp parallel private (i, bar)
allocate(bar%a(3))
!$omp do
do i = 1, 5
   bar%a = [1, 2, 3] + omp_get_thread_num()
   sum_arr(i) = sum(bar%a)
end do
!$omp barrier
print *, sum(bar%a)
!$omp barrier
!$omp single
print *, sum(sum_arr)
!$omp end single
deallocate(bar%a)
!$omp end parallel
end
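(To reproduce, build with OpenMP enabled; the flag varies by compiler, e.g. "ifort -openmp test.f90" (-qopenmp on newer ifort versions) or "gfortran -fopenmp test.f90", where test.f90 is whatever the file happens to be named.)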
I should add that the specific issue seems to be that "bar" is in a private clause. If you make a shared array of objects and have each thread use a different object from that array, it works fine (see the sketch below). It's only an issue when you rely on the private clause to have the compiler handle this for you.
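Something like this is what I mean; a minimal sketch of the workaround (the names are made up, not from my actual code):

use omp_lib, only: omp_get_thread_num, omp_get_max_threads
implicit none
type :: foo
   integer, allocatable :: a(:)
end type foo
type(foo), allocatable :: bars(:)   ! shared; one element per thread
integer :: i, tid

allocate(bars(0:omp_get_max_threads()-1))
!$omp parallel private (i, tid)
tid = omp_get_thread_num()
allocate(bars(tid)%a(3))            ! each thread only ever touches bars(tid)
!$omp do
do i = 1, 5
   bars(tid)%a = [1, 2, 3] + tid
end do
deallocate(bars(tid)%a)
!$omp end parallel
end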
Though this is not the segmentation issue, sum_arr(i) = ... produces an overstrike issue amongst threads.
Does the error occur inside the parallel region, or on exit from the routine containing the code snippet?
Jim Dempsey
It seems to happen inside the region; nothing is printed. If some of the synchronization is taken out, I can get some statements printed, but it still core dumps before it's done.
(I don't know what you mean by "overstrike issue".)
sum_arr is shared, and all threads are writing elements (1:5) concurrently, without regard to the other threads writing all the elements of sum_arr.
The value in sum_arr at each index would be that of the last thread to write to that index. The execution may favor a particular order, but it will not be fully deterministic (the observed values after the do loop may vary).
This is not so much a race condition as a misuse of a shared variable, unless the intent is to observe which thread was last through each iteration of the loop.
Jim Dempsey
If there were no "!$omp do" directive, you would be correct. But the point of "!$omp do" is to divide the loop iterations into non-overlapping subsets: each iteration is performed exactly once, and therefore each element of sum_arr is written once and only once, regardless of the number of OpenMP threads.
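For illustration, here is a minimal sketch (with a print added; this is not the original code): run it with several threads, and each iteration number appears exactly once, handled by a single thread.

use omp_lib, only: omp_get_thread_num
implicit none
integer :: i
!$omp parallel do private (i)
do i = 1, 5
   ! each value of i is executed by exactly one thread, so a write
   ! to sum_arr(i) here would have exactly one writer
   print *, 'iteration', i, 'on thread', omp_get_thread_num()
end do
end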
But this is all a digression. Here's a simpler case that also segfaults, and in this one there is no shared data at all.
use omp_lib, only: omp_get_thread_num
implicit none
type :: foo
   integer, allocatable :: a(:)
end type foo
type(foo) :: bar
integer :: i

!$omp parallel do private (i, bar)
do i = 1, 5
   allocate(bar%a(3))
   bar%a = [1, 2, 3] + omp_get_thread_num()
   deallocate(bar%a)
end do
end
I do not know whether this is a defect or unsupported usage, so I escalated the issue with your two examples (under the internal tracking id below) to Development to get their help with investigating the failure further. I will share details of their findings once I hear from them.
(Internal tracking id: DPD200255873)
