Intel® Fortran Compiler

running ifort on the gfortran testsuite

Janus
New Contributor I

Hi,

As reported in another thread and on c.l.f. (https://groups.google.com/forum/?fromgroups#!topic/comp.lang.fortran/AIHRQ2kJv3c), I recently tried running different compilers (including ifort 18) on the gfortran testsuite via a CMake script:

https://gist.github.com/janusw/17a294125d6956bea736a20c409e7881

My current results indicate that ifort 18 passes 82% of the 1553 tests that were used and fails 281 of them. Although I'm sure my methodology is not perfect (for details see the c.l.f. link and the script on GitHub), and probably not all of those 281 failures are actual issues with ifort, it is still quite easy to identify certain cases that very likely point to ifort bugs.

The first such category is regressions (cases that worked with earlier ifort versions, but fail with 18.1). Here's a list:

+     19 - aliasing_array_result_1.f90 (Failed)
+     22 - alloc_comp_assign_1.f90 (Failed)
+     70 - allocatable_scalar_10.f90 (Failed)
+    127 - array_constructor_45.f90 (Failed)
+    182 - associate_28.f90 (Failed)
+    347 - class_to_type_4.f90 (Failed)
+    398 - common_2.f90 (Failed)
+    811 - intrinsic_modulo_1.f90 (Failed)
+    901 - minloc_3.f90 (Failed)
+    967 - namelist_42.f90 (Failed)
+    968 - namelist_43.f90 (Failed)
+    1064 - pdt_13.f03 (Failed)
+    1322 - scalar_mask_2.f90 (Failed)
+    1398 - submodule_1.f08 (Failed)
+    1469 - typebound_operator_15.f90 (Failed)

 

The numbers represent an arbitrary test numbering; the file names correspond to files in the gfortran test suite. In the worst case each of those files corresponds to a bug in ifort 18, but I haven't checked them case by case.

On top of these there are several internal compiler errors (each of which is by definition a bug):

associate_20.f03(25): catastrophic error: **Internal compiler error: internal abort**
class_allocate_10.f03: catastrophic error: **Internal compiler error: segmentation violation signal raised**
class_allocate_8.f03: catastrophic error: **Internal compiler error: segmentation violation signal raised**
class_assign_1.f08(44): error #5270: Internal Compiler Error: symbol not a SYMTOK
parameter_array_init_4.f90: catastrophic error: **Internal compiler error: segmentation violation signal raised**
spread_init_expr.f03: catastrophic error: **Internal compiler error: segmentation violation signal raised**
submodule_30.f08: catastrophic error: **Internal compiler error: segmentation violation signal raised**
submodule_31.f08(20): error #5270: Internal Compiler Error: symbol not a SYMTOK
submodule_31.f08(36): error #5270: Internal Compiler Error: symbol not a SYMTOK
typebound_operator_9.f03: catastrophic error: **Internal compiler error: segmentation violation signal raised**

 

And then there are 60 instances of runtime segfaults, which are also quite likely to be bugs. I won't list them all here, but my script can be easily used to get a complete list of all 281 failures. I think it would be great if some of the above bugs could be fixed for ifort 18.2.

Cheers,

Janus

 

Steve_Lionel
Honored Contributor III

It may be that Intel is unable, for legal reasons, to access and use the gfortran test suite. Like many commercial software vendors, Intel has strict policies about use of open-source software. That's not to say it can't be done, but it's something that the Intel development team would have to discuss with Intel's lawyers first.

Intel, of course, has its own test suite. It's also likely that at least some of the gfortran tests assume incorrect behavior - I have seen that before (Intel's tests are not immune from that.)

It's good that you tried this. After getting approval, someone would have to perform triage on the tests - some might be more important than others. (For example, incorrect syntax, often found in so-called "negative tests", leading to an ICE is less important than an ICE from a correct program.)

Janus
New Contributor I

Hi Steve,

Thanks for your comments!

 

Steve Lionel (Ret.) wrote:

It may be that Intel is unable, for legal reasons, to access and use the gfortran test suite. Like many commercial software vendors, Intel has strict policies about use of open-source software. That's not to say it can't be done, but it's something that the Intel development team would have to discuss with Intel's lawyers first.

Huh, really? That thought didn't even occur to me. The point where it starts to become problematic is probably where Intel might want to include these tests in their own proprietary testsuite, right? I'm not a lawyer, but I guess the gfortran tests are technically GPL'd code, just as everything else in the GCC repo.

But then again, they don't even need to be included in Intel's testsuite if there is a publicly available open-source testsuite lying around that you can test against without too much effort.

 

Intel, of course, has its own test suite. It's also likely that at least some of the gfortran tests assume incorrect behavior - I have seen that before (Intel's tests are not immune from that.)

Sure, there might be false assumptions in a small minority of the cases, but I'm pretty sure the gfortran team would be happy about bug reports regarding those tests ;)

This is a classic case where everyone profits from a bit of collaboration. Intel gets a better compiler, GCC gets a better testsuite (leading to a better compiler as well). Everyone is happy.

 

 

It's good that you tried this. After getting approval, someone would have to perform triage on the tests - some might be more important than others. (For example, incorrect syntax, often found in so-called "negative tests", leading to an ICE is less important than an ICE from a correct program.)

Agreed. Note that I picked only the runtime tests ("dg-do run"), which are supposed to be valid Fortran, so no incorrect syntax should be involved.

Out of the 15 regressions listed above, these three fail due to a runtime segfault:

alloc_comp_assign_1.f90
intrinsic_modulo_1.f90
minloc_3.f90

They might be the first candidates to look at. I dearly hope someone will actually take the time to do that, and maybe to look at the other ones too (those mostly abort at runtime due to supposedly wrong results, or throw some other runtime error).
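To give an idea of what kind of checks such a test performs, here is a minimal sketch of my own in the spirit of intrinsic_modulo_1.f90 (not the actual gfortran test; the values are made up): the standard defines MODULO(A,P) to take the sign of P and MOD(A,P) to take the sign of A.

program modulo_check
   implicit none
   ! Expected results per the standard: MODULO follows the sign of P, MOD the sign of A.
   if (modulo( 8,  5) /=  3) error stop "modulo( 8, 5)"
   if (modulo(-8,  5) /=  2) error stop "modulo(-8, 5)"
   if (modulo( 8, -5) /= -2) error stop "modulo( 8,-5)"
   if (modulo(-8, -5) /= -3) error stop "modulo(-8,-5)"
   if (mod   (-8,  5) /= -3) error stop "mod(-8, 5)"
   if (mod   ( 8, -5) /=  3) error stop "mod( 8,-5)"
   print *, "all MODULO/MOD checks passed"
end program modulo_check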

Cheers,

Janus

 

Steve_Lionel
Honored Contributor III

I remember from my time at Intel that ANY use of open-source tools, even those that did not have code that made it into the product, required explicit approval. There's a big difference between GPL2 and GPL3, for example. In essence, it would be considered VERY BAD for an Intel developer to just grab the gfortran test suite and start using it without prior review and clearance. We had to undergo yearly training about this, and even then there were sometimes slip-ups.

There is great collaboration among the various development teams - bug reports to the other teams were quite common in all directions, as each team generally has all the other compilers on hand to test with (well, with the exception of those that didn't run on the platform, and there we'd just send the other team a test and ask them to run it).

Janus
New Contributor I

Steve Lionel (Ret.) wrote:

I remember from my time at Intel that ANY use of open-source tools, even those that did not have code that made it into the product, required explicit approval. There's a big difference between GPL2 and GPL3, for example. In essence, it would be considered VERY BAD for an Intel developer to just grab the gfortran test suite and start using it without prior review and clearance.

To be honest, I don't understand this at all. Sounds like a very strange corporate culture to me. I was assuming that the reason nobody at Intel had tried my little exercise before was a mere lack of manpower, but what you're describing sounds more like some sort of unhealthy ignorance (or an irrational phobia of anything open source). Maybe this is just my ignorance of the fine points of GPL licensing, but I completely fail to see how any version of the GPL prevents anyone from running "ifort test_case_from_gcc_repo.f90", analysing the result, and fixing the compiler.

I assume that the user base, and thus the testing coverage, of gfortran has been significantly larger than that of ifort for quite a while now. That Intel is not willing to leverage such an enormous collection of publicly available test cases, even when served up on a silver platter, is downright negligent and simply incomprehensible. This strange attitude may partially explain the rather poor quality of recent ifort releases.

To illustrate this, I tried my script not only on the latest ifort release, but also on earlier versions, and found the following results:

ifort 18.0.1:   82% tests passed, 281 tests failed out of 1553
ifort 17.0.5:   78% tests passed, 344 tests failed out of 1553
ifort 16.0.4:   78% tests passed, 336 tests failed out of 1553
ifort 15.0.7:   76% tests passed, 367 tests failed out of 1553
ifort 14.0.4:   74% tests passed, 399 tests failed out of 1553
ifort 10.1:     51% tests passed, 761 tests failed out of 1553

One can mostly observe steady progress here, with one anomaly: ifort 17 actually shows more failures than the previous release. Comparing the failure lists for each release, one can see that the number of regressions from one release to the next is usually below ten. For ifort 17, however, I observe 41 (!) regressions relative to ifort 16, which is more than the number of fixes (even in the fifth update, 17.0.5; the initial release must have been much worse).

I can only speculate whether this ifort 17 disaster correlates with the departure of Dr Fortran from the ifort team, or is rather caused by a premature rush towards nominal F08 feature completeness (instead of focusing on the less glamorous areas of bug fixing and QA). All I can say is that supposed F08 completeness is not worth anything if each feature is riddled with so many bugs that it's practically unusable. I certainly hope this situation improves over the course of the still-young 18.x release series.

Cheers,

Janus

Eugene_E_Intel
Employee

Hi Janus,

Thank you for doing the work on the CMake script. We'll take a look at these tests.

--Eugene

 

Janus
New Contributor I

Thanks for the feedback, Eugene. Good to hear that someone is listening. (Note that after reporting four different issues with ifort 18, with zero fixes so far, I got a bit frustrated and figured that the above approach is a much more effective way of reporting bugs than spending countless hours isolating compiler bugs seen in a large code base.)

Cheers,

Janus

Janus
New Contributor I

FYI, I just found a small problem in my script that affected coarray tests with non-standard file extensions. After fixing this in version 4 of the script, the result for ifort 18 changes very slightly to:

82% tests passed, 277 tests failed out of 1553

(from the 281 failures claimed earlier). Obviously that doesn't change the overall picture very much. In case anyone notices further problems with the script, please let me know.

Cheers,

Janus

 

Steve_Lionel
Honored Contributor III

It's not an "unhealthy phobia". It's a very real issue when inclusion, or sometimes mere use, of GPL code or even tools causes your own product to become GPL. Commercial software vendors don't like that. Intel has a process for making use of open source, and it needs to be followed.

Janus
New Contributor I

Steve Lionel (Ret.) wrote:

It's not an "unhealthy phobia". It's a very real issue when inclusion, or sometimes use, of GPL code or even tools causes your own product to become GPL. Commercial software vendors don't like that. Intel has a process for making use of open-source and it needs to be followed.

I'm well aware that there are all sorts of legal pitfalls for a commercial software company when it comes to using GPL'd software. I'm not a lawyer, but as a developer who has written both open-source and closed-source code, I think I have a certain level of understanding of such issues (probably not including all the details and corner cases).

Concerning the test cases we're discussing, I can imagine that it might be a problem for Intel to take these test cases (which are protected by the GPL) and include them in their private testing repository (non-GPL), possibly modifying the test cases to fit Intel's testing infrastructure. In contrast, I fail to see how it would be a problem for an Intel engineer to simply 'use' such a test case (as 'input' to the commercial software, i.e. the ifort compiler) in order to discover a compiler bug. The GPL'd test case is not being linked to the commercial software and is not becoming part of the commercial product in any way; it simply serves as 'input' (in the same sense that, for example, an image serves as input to a pattern-recognition algorithm).

If there were a legal problem with this, it seems like there would also be a problem with compiling *any* open-source code out there with the ifort compiler. I'm pretty sure that's not the case.

But quite possibly I'm just missing something. I'm not a lawyer after all. I'll be happy to be enlightened by someone who has a deeper understanding of all these legal issues.

Cheers,

Janus

 

Steve_Lionel
Honored Contributor III

All I'm saying is that Intel policy is to have such things reviewed by the legal team. I took the training enough times to remember that. The lawyers will have to review which of the various licenses the test suite falls under and determine what, if anything, the development team has to do in order to use it. It's not worth arguing about here - I'm simply telling you what Intel's rules on this are (or were as of a year or two ago.)

Lorri_M_Intel
Employee

The rules are still valid, and yes, we are trained/tested on when and where we can reference GPL software, and we are more than encouraged to err on the side of caution. Case in point: let's say that I downloaded the gcc toolchain to get the Fortran tests. That's all I wanted, and I pinky-swear that I never looked in the sources. The next version of Intel Fortran comes out, and coincidentally, the generated code for some complicated feature *exactly matches* some spiffy thing that gfortran did in that release. Now I have to prove I didn't get the algorithm from the GPL'd sources.

There's a group that is "cleared" to access gcc downloads. They download new versions of the gcc toolchain and build the tools, keeping the sources away from the Intel compiler developers. Intel has people who upload patches to gcc; they're just not the same people working on the Intel compiler.

And the gcc/gfortran test suites have been made available to the compiler developers because, as you say, there's nothing special/private/whatever in the test sources.

We just don't have as recent a copy of the test suite as you have.

                  --Lorri

 

Steve_Lionel
Honored Contributor III

Intel is establishing that it has processes and procedures to prevent "contamination" of its own products. I see nothing amusing or wrong with this. The procedures and processes are deliberately easy to follow and don't get in the way of appropriate use of open-source software. They also DO go a long way to prevent inadvertent "borrowing" of GPL-licensed code. In addition to the separation, there is also code-scanning against a database of known OSS. (It irritated me no end that an example program *I* wrote for CVF got put into some OSS test suite, without attribution, and I had to explain each release that the match was ok.)

Janus
New Contributor I

I'm sorry guys, I really didn't mean to sound disrespectful, but somehow I feel like this discussion is drifting into the absurd, and I honestly don't know how to respond to that other than with humor and sarcasm. It was not my intention to offend anyone.

My original motivation was to make a contribution towards improving the quality of the ifort compiler, and it would be great if we could re-focus the discussion on that goal.

Cheers,

Janus

 

aphirst
Beginner

@Janus - Reading your comments, I'm reminded a lot of myself when I started on my 3-year work placement at a company with similar internal processes regarding the use of Free Software. Well, perhaps not as strict as Intel, but they were German ( ;) ). I would consider myself to come from a pretty firm "Free Software" background (with full disclosure: leaning much more strongly towards e.g. GPL3 than "permissive" licenses like MIT/BSD), so it took me a while to get accustomed.

As tempting as it might be, it's entirely fruitless to try to argue this on quite these terms - as Steve quite rightly points out, the processes exist for a reason (is not "making sure they don't infringe Free Software licenses" exactly what we want companies to do?); and while I certainly agree that it's impossible in principle to be sure that an ifort developer isn't familiar with the gfortran codebase, it's surely still Intel's responsibility to take whatever steps it can to reduce its liability.

Now, I'm just a spectator (and most certainly Not A Lawyertm), but for Intel to be able to use GCC's "set" of test cases, I only really see a handful of options:

  • GCC (represented by the GNU Consortium? is that right?) grants special license to the test cases, either
    • Publicly, under an appropriately-chosen license
    • Privately, to either
      • Intel directly, or
      • The Fortran Standards Committee
  • Intel goes through a lengthy (and expensive, because man hours = money) process to incorporate the GPL'd test cases into their development process, while ensuring no license violations (including but surely not limited to not including the tests in the released product) (Note: While it's one thing to claim that what you're doing is "clearly" non-violating, it's quite another to convince a legal team, given the very real consequences of being wrong.)
  • Someone else makes an extensive set of test cases entirely from scratch, and publishes these online under a permissive license. (Though it's not clear to me whether this is sufficient to convince Intel's legal team, since who's to vouch for the honesty of the "from scratch" part).

Maybe I missed something, but I just had a "flash" of thought, and felt compelled to chip in.

Regardless what happens, I really hope something comes of this.

FortranFan
Honored Contributor II

Janus wrote:

..  The first such category is regressions (cases that worked with earlier ifort versions, but fail with 18.1). Here's a list:

22 - alloc_comp_assign_1.f90 (Failed)

 
This particular test from the gfortran testsuite is itself questionable and requires review and validation. My observation is that it checks for the wrong thing and gives a false positive with that compiler.
 
Consider a reduced version of it:
   use, intrinsic :: iso_fortran_env, only : compiler_version

   type :: t
      character(len=1), allocatable :: c(:)
   end type

   type(t) :: x(3)
   type(t) :: y(3)

   print *, "Compiler Version: ", compiler_version()

   x(1)%c = [ "h","e","l","l","o" ]
   x(2)%c = [ "g","'","d","a","y" ]
   x(3)%c = [ "g","o","d","a","g" ]

   y(2:1:-1) = x(1:2)

   print *, "x(1)%c = ", x(1)%c, "; expected = ", [ "h","e","l","l","o" ]
   print *, "y(1)%c = ", y(1)%c, "; expected = ", [ "h","e","l","l","o" ]

   if (any (y(1)%c /= [ "h","e","l","l","o" ]) ) then
      print *, "Test failed."
   else
      print *, "Successful test."
   end if

   stop

end

Intel Fortran compiler 18.0 Update 1 works as expected with this case:


C:\Temp>ifort /standard-semantics /warn:all /check:all p.f90 -o p.exe
Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R
) 64, Version 18.0.1.156 Build 20171018
Copyright (C) 1985-2017 Intel Corporation.  All rights reserved.

Microsoft (R) Incremental Linker Version 14.12.25835.0
Copyright (C) Microsoft Corporation.  All rights reserved.

-out:p.exe
-subsystem:console
p.obj

C:\Temp>p.exe
 Compiler Version:
 Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(

 R) 64, Version 18.0.1.156 Build 20171018

 x(1)%c = hello; expected = hello
 y(1)%c = hello; expected = hello
 Successful test.

C:\Temp>

In addition to any other licensing issues, the Intel Fortran team would need to thoroughly screen and vet any and all public content for accuracy and reliability, i.e., do its own acceptance testing, which can be very expensive.

I would rather hope the Intel Fortran team focuses its efforts on the ISO/IEC standard for Fortran and on all the customer support incidents, many of which have been intensively and extensively evaluated by Intel's own customers before submission at the OSC.

FortranFan
Honored Contributor II

Janus wrote:

..  The first such category is regressions (cases that worked with earlier ifort versions, but fail with 18.1). Here's a list:

+    1469 - typebound_operator_15.f90 (Failed)

 
This particular test from the gfortran testsuite is also questionable and requires review and validation. It involves overriding a type-bound procedure with a PRIVATE attribute in an extension type that is part of another module; this is not permitted by the standard. For the test to be standard-conforming, either the base type and the extended type should be part of the same module, or the procedure in question must have the PUBLIC attribute. So Intel Fortran did not fail this test; rather, Intel Fortran correctly follows the standard.
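To illustrate the construct in question, here is a minimal sketch of my own (a hypothetical example, not the gfortran test itself): a type-bound procedure that is accessible from the parent module is overridden by a PRIVATE binding in an extension type defined in a different module, which is the pattern at issue.

module base_mod
   implicit none
   type :: base_t
   contains
      procedure :: describe => describe_base   ! public (default accessibility) binding
   end type
contains
   subroutine describe_base(this)
      class(base_t), intent(in) :: this
      print *, "base"
   end subroutine
end module

module ext_mod
   use base_mod
   implicit none
   type, extends(base_t) :: ext_t
   contains
      ! Overriding the inherited public binding with a PRIVATE one in a module
      ! other than the one defining the parent type; this is the point of contention.
      procedure, private :: describe => describe_ext
   end type
contains
   subroutine describe_ext(this)
      class(ext_t), intent(in) :: this
      print *, "extension"
   end subroutine
end module

gfortran accepts this pattern (the test comes from its own testsuite), while ifort rejects it, which is presumably the disagreement described above.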
FortranFan
Honored Contributor II

Janus wrote:

..  The first such category is regressions (cases that worked with earlier ifort versions, but fail with 18.1). Here's a list:

+    347 - class_to_type_4.f90 (Failed)

 
This particular test from the gfortran testsuite is also questionable and requires review and validation. The test involves an intrinsic assignment where the right-hand side evaluates to a dynamic type that is not compatible with the left-hand side. In other words, the instruction of concern in the test can be captured in a snippet like so:
   type :: t
   end type

   type, extends(t) :: e
   end type

   type(t) :: foo
   class(t), allocatable :: bar

   allocate( e :: bar )

   foo = bar

end

and Intel Fortran is correct in throwing a run-time exception with the assignment on line 12:

forrtl: severe (189): LHS and RHS of an assignment statement have incompatible t
ypes
Image              PC                Routine            Line        Source

p.exe              000000013FB98EE4  Unknown               Unknown  Unknown
p.exe              000000013FB9127A  MAIN__                     12  p.f90

 

Juergen_R_R
Valued Contributor I

No, the latter is actually correct and should _not_ give a runtime error. It works with all gfortran versions since 4.8, and also with nagfor 6.1 and 6.2. The standard (the 2010 J3 document) says in its introduction: "Intrinsic assignment to an allocatable polymorphic variable is allowed." Section 7.2 then defines the assignment statement as <variable> = <expr>, and 7.2.1.2.1 says: "if the variable is polymorphic it shall be allocatable and not a coarray". However, this doesn't apply here, as foo is _not_ polymorphic but has fixed type t. According to 7.2.1.2.4 the assignment is ok, because the _declared_ types of foo and bar are the same (t in both cases). After the assignment foo is still of non-polymorphic type t, but if you give it, e.g., an integer component i that is 2 for foo and 3 for bar, then before the assignment foo%i is 2 and after the assignment it is 3. nagfor and gfortran (at least the later versions) get that; ifort 2018 gives this runtime error, while ifort 2017 actually got it right. So this _is_ a regression.
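To make the integer-component example concrete, here is a minimal sketch of my own (not the gfortran test) in the same style as the snippet above: the declared types of foo and bar are both t, bar's dynamic type is the extension e, and the component i shows the effect of the assignment.

   type :: t
      integer :: i = 2
   end type

   type, extends(t) :: e
   end type

   type(t) :: foo
   class(t), allocatable :: bar

   allocate( e :: bar )
   bar%i = 3

   print *, foo%i   ! prints 2
   foo = bar        ! declared types match (both t); per the reading above this is
                    ! a valid intrinsic assignment copying the t part of bar
   print *, foo%i   ! expected to print 3; ifort 18 instead raises error 189 here

end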

FortranFan
Honored Contributor II

Juergen R. wrote:

No, the latter is actually correct and should _not_ give a runtime error. .. So this _is_ a regression. 

With the simple code shown in Quote #20 and the instruction at line 12, the LHS of the assignment is of type t (corresponding to variable foo), whereas the expression on the RHS evaluates to type e, given the dynamic type of bar. There is nothing in the standard that states this is allowed in an intrinsic assignment.
