I'm compiling the following code (MyTest.f90) with ifx -o test -O0 -g -traceback -check all MyTest.f90 on Linux (WSL/Ubuntu):
program mytest
implicit none
integer :: n = 5
write(*, '(i5)') 12345
write(*, '(i<n>)') 12345
end program
When I run it, the first write statement behaves as expected, but the second gives a stack overflow when "-check" is specified, and a segmentation fault without "-check" or when using ifort instead.
ifx version is 2024.2.1, ifort 2021.13.1.
No problem with the same code on Windows with oneAPI 2025. I have not yet tried 2025 on Linux.
Thanks for pointing out this is non-standard, I was not aware of that.
The reason I even started building on Linux was the hope of finding the cause of a major performance regression in our production code (Windows) when moving from ifort to ifx. And here you go: it is the use of variable format expressions!
In one module I use them to indent logging. I compared this module's runtime for combinations of using/not using variable format expressions (<x>?), ifx/ifort, and different thread counts, as the code in question is OpenMP-parallelized (all writes in locked sections, of course).
The results show that with ifort there is no significant performance difference with or without variable format expressions. With ifx, however, performance is significantly worse with variable format expressions even in single-threaded runs, and as more threads are added performance plummets.
These are the test results:
<x>? | Compiler | Threads | runtime
-------+----------+---------+--------
TRUE | ifx | 1 | 2.545
TRUE | ifx | 2 | 3.832
TRUE | ifx | 4 | 9.882
TRUE | ifx | 8 | 17.48
TRUE | ifort | 1 | 0.952
TRUE | ifort | 2 | 0.542
TRUE | ifort | 4 | 0.337
TRUE | ifort | 8 | 0.321
FALSE | ifx | 1 | 0.845
FALSE | ifx | 2 | 0.5
FALSE | ifx | 4 | 0.35
FALSE | ifx | 8 | 0.302
FALSE | ifort | 1 | 1.045
FALSE | ifort | 2 | 0.473
FALSE | ifort | 4 | 0.293
FALSE | ifort | 8 | 0.372
I'll replace all variable format expressions, no problem - I'm mainly reporting this for others' reference. Maybe it would make sense to warn against using this feature while it behaves like this? Or, on Linux, to have compilation fail rather than execution, if it cannot be fixed?
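For the record, the indentation use case can be handled without VFEs, e.g. by building the leading blanks with the repeat() intrinsic. A minimal sketch (log_msg is a hypothetical helper for illustration, not the actual module):

```fortran
program indent_log
  implicit none
  call log_msg(0, 'entering solver')
  call log_msg(4, 'iteration 1')

contains

  ! Prepend "indent" spaces with repeat(); no variable format expression needed
  subroutine log_msg(indent, msg)
    integer, intent(in) :: indent
    character(*), intent(in) :: msg
    write(*, '(a)') repeat(' ', indent) // msg
  end subroutine log_msg

end program indent_log
```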
Intel, do you acknowledge this as a bug?
I am not Intel, but I have some theories as to why there is a difference. VFEs are a very tricky thing to implement, and I'd guess that they had to be shoehorned into LLVM somehow and the implementation is suboptimal, especially in the presence of OpenMP. If I were still in Intel support, I would not consider this a bug, but rather a usage that deserves some further investigation.
My recommendation is to recode the application to not use this ancient extension. The standard Fortran language now has features that can eliminate the need for VFEs.
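For example, one standard-conforming replacement is to build the format string at run time in a character variable, or to use the i0 edit descriptor when minimal width is acceptable. A minimal sketch:

```fortran
program no_vfe
  implicit none
  integer :: n = 5
  character(len=16) :: fmt

  ! Build the format string at run time instead of the '(i<n>)' extension
  write(fmt, '(a,i0,a)') '(i', n, ')'
  write(*, fmt) 12345

  ! Or use i0, which prints the value with minimal width
  write(*, '(i0)') 12345
end program no_vfe
```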
Thanks for the insight, Steve, very interesting, especially the possible relation to OpenMP. I agree it is not a big deal to replace VFE, although I kind of like the readability. I have already replaced all usages now, still a bit curious about how this will be handled.
Unless one of the Intel support people wants to pick this up, I'd assume it won't be looked at unless someone with paid support complains about it. Even if they do, I'd expect the developers to treat it as low priority, and I would not blame them.