Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

Aug 30th (Wednesday) Townhall with the Intel® Fortran Compiler Developers

Ron_Green
Moderator
5,305 Views

Wednesday August 30th

9:00am US PDT

Webinar Overview Page

Registration and Join here

 

Townhall with the Intel® Fortran Compiler Developers

You have heard all about The Next Chapter for the Intel® Fortran Compiler. Now it’s your turn to give us your feedback on our compiler in this townhall webinar with the developers behind the Intel® Fortran Compiler. We will have a short preview of what is coming in version 2024.0, and we will show off our new uninitialized memory checking feature in our Linux* compiler version 2023.2.0. But the real focus of this meeting is to give you a chance to get your questions answered. Bring your questions, bring your suggestions, and we look forward to sharing the latest information on our Fortran compiler.

 

Click here to watch the replay on demand. Other webinars that you may find interesting are also listed there.

23 Replies
AlHill
Super User
880 Views

@willey11   Such insight....

 

Doc (not an Intel employee or contractor)
[Maybe Windows 12 will be better]

John_Campbell
New Contributor II
608 Views

I joined in on the following webinar:

(apologies as I could not find a more direct thread)

The Case for OpenMP: Why ISO Fortran Is Not Enough for Heterogeneous Parallelism

Date: Wednesday, November 8, 2023, 9:00-10:00 AM PST

 

It discussed data offloading for !$OMP PARALLEL DO and also DO CONCURRENT.

I found this presentation very interesting, although watching live starting at 4:00 am local time was a challenge.

It mainly considered the management of data transfers between the different/heterogeneous memory types that can be encountered when off-loading tasks to a GPU with OpenMP, especially managing when those transfers should or should not occur. It discussed the OpenMP directives that are becoming available to manage these data transfers more efficiently. Previously, I was not familiar with this definition of “heterogeneous” memory types when considering data off-loading to GPUs, or with the concept of using multiple GPUs in the same hardware environment!
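
As a rough sketch of the style of directive that was discussed (my own toy example, not code from the presentation), a target data region can keep arrays resident on the device across several offloaded loops, so they are not shipped back and forth each time:

program offload_sketch
  implicit none
  integer, parameter :: n = 1000000
  real, allocatable :: a(:), b(:), c(:)
  integer :: i

  allocate (a(n), b(n), c(n))
  a = 1.0
  b = 2.0

  ! Map a and b to the device once, keep c resident between the two loops,
  ! and copy c back to the host only when the data region ends.
  !$omp target data map(to: a, b) map(from: c)

  !$omp target teams distribute parallel do
  do i = 1, n
     c(i) = a(i) + b(i)
  end do
  !$omp end target teams distribute parallel do

  !$omp target teams distribute parallel do
  do i = 1, n
     c(i) = c(i) * 2.0
  end do
  !$omp end target teams distribute parallel do

  !$omp end target data

  print *, c(1), c(n)
end program offload_sketch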

The presentation also pointed out the lack of controls available for “do concurrent”, should the compiler attempt any multi-threading of it. “do concurrent” looks naked in the present Fortran standard.
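
For comparison, about the only controls the standard does give “do concurrent” are the Fortran 2018 locality specifiers, which say nothing about how, where, or by how many threads the iterations are run; a minimal sketch:

program dc_sketch
  implicit none
  integer, parameter :: n = 1000
  real :: x(n), tmp
  integer :: i

  x = 1.0

  ! Locality specifiers (Fortran 2018) describe per-iteration data locality,
  ! but the standard says nothing about threads, scheduling, chunking,
  ! or which device the iterations run on.
  do concurrent (i = 1:n) local(tmp) shared(x) default(none)
     tmp = 2.0 * x(i)
     x(i) = tmp + 1.0
  end do

  print *, x(1), x(n)
end program dc_sketch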

It certainly made me consider the lack of data transfer management between different memory types that is available for what may be considered “homogeneous” memory implementations of OpenMP, where data must move between main memory and the multiple levels of cache available to each thread. This is a significant inefficiency in my implementations that involve multi-threaded use of large in-memory data sets on dual-channel memory, where memory transfer bandwidth often stalls performance. Is there any directive approach to improving cache efficiency that I am not aware of?
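
About the only lever I know of for the homogeneous case is restructuring loops by hand so each thread reuses cache-resident tiles; something like the following sketch (the tile size is a guess and would need tuning):

program blocked_sketch
  implicit none
  integer, parameter :: n = 2048, nb = 64   ! nb needs tuning for the cache
  real, allocatable :: a(:,:), c(:,:)
  integer :: i, j, ii, jj

  allocate (a(n,n), c(n,n))
  a = 1.0
  c = 0.0

  ! Accumulate the transpose of a into c in nb-by-nb tiles, so the strided
  ! accesses to a stay within a cache-resident tile instead of streaming
  ! the whole array from main memory for every column.
  !$omp parallel do collapse(2) private(i, j)
  do jj = 1, n, nb
     do ii = 1, n, nb
        do j = jj, min(jj + nb - 1, n)
           do i = ii, min(ii + nb - 1, n)
              c(i, j) = c(i, j) + a(j, i)
           end do
        end do
     end do
  end do
  !$omp end parallel do

  print *, c(1, 1), c(n, n)
end program blocked_sketch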

I am envious that the data transfer problem discussed there is being addressed for other OpenMP implementations.

Another area where memory transfer inefficiency has been addressed in Modern Fortran is the use of temporary copies of array sections, so it is not the case that data transfer problems have been ignored in Fortran implementations: the possibility of non-contiguous memory is a managed issue!
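
As a sketch of what I mean (my own example), passing a strided section to an explicit-shape dummy argument typically forces a contiguous copy-in/copy-out temporary, while an assumed-shape dummy can work on the section in place:

program section_sketch
  implicit none
  real :: a(100, 100)
  a = 1.0

  ! a(1, :) is a non-contiguous (strided) section.  Passing it to an
  ! explicit-shape dummy typically makes the compiler build a contiguous
  ! copy-in/copy-out temporary; an assumed-shape dummy receives the
  ! section descriptor directly, with no copy.
  call explicit_shape(a(1, :), 100)
  call assumed_shape(a(1, :))

contains

  subroutine explicit_shape(x, n)
    integer, intent(in) :: n
    real, intent(inout) :: x(n)     ! contiguous temporary likely created
    x = x + 1.0
  end subroutine explicit_shape

  subroutine assumed_shape(x)
    real, intent(inout) :: x(:)     ! works on the strided section in place
    x = x + 1.0
  end subroutine assumed_shape

end program section_sketch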

There must be a question of how long Fortran will remain hardware agnostic, especially as this presentation has shown that OpenMP directives can be used to address memory transfer inefficiency.

 

I hope there can be a more coordinated solution between Fortran and OpenMP!

 

I would recommend this webinar to those who have not yet viewed it.

jimdempseyatthecove
Honored Contributor III
598 Views

From my understanding, internally when DO CONCURRENT generates parallel code, it creates its thread team using the Intel OpenMP library. In this respect, an otherwise serial program would experience similar overhead in thread team establishment and dispatch.

 

Be aware, though, that if the application is already threaded (say, with OpenMP), the DO CONCURRENT appears inside a parallel region, and nesting is enabled, then the DO CONCURRENT will establish a new thread team for the thread executing it. In other words, you may get blind-sided by thread oversubscription.

This is not to say you shouldn't use DO CONCURRENT; it has many built-in features that are not present in the !$omp directives. Rather, it is something to be aware of regarding the possibility of thread oversubscription.
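
A minimal sketch of the situation (whether the DO CONCURRENT is actually threaded depends on the compiler and the options used, so treat this as illustrative only):

program oversubscribe_sketch
  use omp_lib
  implicit none

  ! The outer OpenMP parallel region creates one thread team here.
  !$omp parallel
  call work(omp_get_thread_num())
  !$omp end parallel

contains

  subroutine work(id)
    integer, intent(in) :: id
    integer :: i
    real :: v(1000)

    ! If the compiler chooses to thread this DO CONCURRENT and nested
    ! parallelism is enabled, each outer thread may create its own inner
    ! team, e.g. 8 outer x 8 inner = 64 threads on an 8-core machine.
    do concurrent (i = 1:1000)
       v(i) = sqrt(real(i + id))
    end do

    if (id == 0) print *, v(1), v(1000)
  end subroutine work

end program oversubscribe_sketch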

 

Jim Dempsey
