Hello,
I noticed that the beta version of ifort 2021, which should support OpenMP 5, is available on DevCloud.
I tried to compile this small program:
```fortran
program vec_mult
  use iso_fortran_env, only: dp => real64
  integer, parameter :: N = 1000000
  integer :: i
  real(dp) :: p(N), dx, sum
  dx = 4._dp * atan(1._dp) / dble(N)
  sum = 0._dp
  !$omp target map(from:p)
  !$omp parallel do
  do i = 1, N
     p(i) = sin(dble(i)*dx)*sin(dble(i)*dx)*dx
  end do
  !$omp end target
  !$omp target map(tofrom:sum) map(to:p)
  !$omp parallel do reduction(+:sum)
  do i = 1, N
     sum = sum + p(i)
  end do
  !$omp end target
  print *, sum
end program
```
but the compiler issues an ICE: "segmentation violation signal raised".
The error is caused by the reduction clause; without it, the code compiles and runs.
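Until the reduction clause works, one possible workaround (a sketch only, untested against the beta, and assuming the ICE is triggered solely by `reduction` inside the target region) is to accumulate per-thread partial sums into a mapped array on the device and finish the reduction on the host. The `MAXT` bound and program name here are illustrative choices, not from the original post:

```fortran
program vec_sum_workaround
  use iso_fortran_env, only: dp => real64
  use omp_lib, only: omp_get_thread_num
  integer, parameter :: N = 1000000, MAXT = 256  ! MAXT: assumed upper bound on threads
  integer :: i, tid
  real(dp) :: p(N), dx, total, partial(0:MAXT-1)
  dx = 4._dp * atan(1._dp) / dble(N)
  do i = 1, N
     p(i) = sin(dble(i)*dx)**2 * dx
  end do
  partial = 0._dp
  ! Per-thread partial sums on the device; no reduction clause needed.
  !$omp target map(to:p) map(tofrom:partial)
  !$omp parallel private(tid)
  tid = omp_get_thread_num()
  !$omp do
  do i = 1, N
     partial(tid) = partial(tid) + p(i)
  end do
  !$omp end parallel
  !$omp end target
  ! Finish the reduction on the host with the SUM intrinsic.
  total = sum(partial)
  print *, total
end program
```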
Pietro
Typically Intel wants you to use the Online Service Center to report bugs in beta versions.
With the release of the beta version of Intel oneAPI, Fortran users are posting their issues on this forum. DevCloud users are compiling with a beta version of 2021, i.e. Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 2021.1 Beta Build 20191031.
What compiler options are you using?
There are some known limitations with the beta Fortran compiler available with oneAPI. Check out the Release Notes.
Hi, I am just using:
```
ifx -qopenmp
```
Thank you. I duplicated your problem, the ICE. The code compiled successfully with the nightly build of the compiler, so look for the fix in the beta release due in January.
BTW... the ifx command will be going away. Use "ifort -qnextgen -qopenmp" instead.
For others who may not know about the beta Fortran compiler available with oneAPI, here's some more information.
The Fortran next generation code generator that is invoked when compiling with the -qnextgen compiler option does not yet support all language features.
At present it supports:
- All Fortran 77 language features, except for alternate entries
- Much of Fortran 90; pointers, derived types, and some other features are not yet fully supported
- Most of the OpenMP 4.5 directives
Features are continually being added. Check the release notes for more details and the latest news.
One more tip... check out the Getting Started Guide. It's mostly about C++, but there's a bit there about Fortran. There are additional compiler options required when you use !$OMP TARGET to use the accelerator. There's also information about some useful environment variables. I'm partial to LIBOMPTARGET_PROFILE.
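As an illustration only, the extra offload option and the profiling variable might be combined like this. The `-fopenmp-targets=spir64` spelling and the filename `vec_mult.f90` are assumptions on my part (taken from later oneAPI documentation); verify the exact options against the Getting Started Guide for your beta build:

```shell
# Hypothetical session; flag spellings should be checked against the
# beta's Getting Started Guide before use.
ifort -qnextgen -qopenmp -fopenmp-targets=spir64 vec_mult.f90 -o vec_mult

# Ask the OpenMP offload runtime to print per-region timing after the run.
export LIBOMPTARGET_PROFILE=T
./vec_mult
```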
Thanks a lot!
Looking forward to the next version.
Pietro
I have a query about OpenMP 5.0.
A bit of background as the basis for the question: one of my Xeon systems has dual Knights Corner coprocessors. The KNC has been discontinued. Its programming paradigm permitted offload programming that was not part of the OpenMP syntax (an offloaded kernel could use OpenMP within the KNC) but was functionally similar, with the copy clause and other clauses/directives. The KNC offload could also target a KNC across a network.
One question/feature I couldn't get answered before was whether the KNC offload paradigm could be used to offload to a different node's host CPU, in other words, to use the offload feature in place of OpenMPI or coarrays (over MPI). No reply on this.
What I would like to know is: does OpenMP 5.0 permit offloading across a cluster and/or to a remote system (e.g. over TCP/IP or a fabric) targeting the host CPU(s) on a node, as opposed to a GPU located locally or remotely?
If yes, can someone show a simple example by adapting the matrix multiply given in the Getting Started Guide (see post #5 for the link)?
Jim Dempsey