Intel® Fortran Compiler

What is the difference between f77 and Intel Visual Fortran Compilers in terms of computation?

mayank8281
Beginner
Hi,

I am trying to compile and run an f77 Fortran code with the Intel Visual Fortran compiler. However, for the same input file I am getting different results from the Intel Fortran compiler than from the f77 compiler. Although the relative error is small, what is the cause of this difference?

I have read in an article that f77 is fixed form while the newer versions are free form, and that compiling fixed-form source code with a free-form compiler may give compilation errors. What is the difference between fixed form and free form? If there is a compilation error, how do I fix it?
Steven_L_Intel1
Employee
If your program builds without errors, then source form is not an issue. You can read about source form in the Intel Fortran Language Reference. Intel Fortran supports fixed-form source for source files with .f or .for file types. In any event, computation differences would not be related to this.

You do not say which "f77 compiler" you are using or on which processor and OS. There are many possible reasons for computational differences, such as use of SSE instructions, different math libraries, optimizations causing reordering of instructions and bugs in the code. In general, you should not expect bit-for-bit sameness for floating point applications across different implementations.
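
For illustration (a trivial example, not from the posted code), here is the same statement in each source form. Fixed-form source reserves columns 1-5 for statement labels, uses column 6 for a continuation marker, places statements in columns 7-72, and treats a "C" (or "*") in column 1 as a comment:

C     Fixed-form source (.f or .for)
      PROGRAM DEMO
      X = 1.0
     &    + 2.0
      PRINT *, X
      END

Free-form source (typically .f90) has no column rules and uses "&" at the end of a line to continue a statement:

! Free-form source (.f90)
program demo
  x = 1.0 &
      + 2.0
  print *, x
end program demo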
mayank8281
Beginner
Hi Steve,
Thanks for your prompt reply.

The source code was written in 1964-66 and has undergone several modifications.
I had to modify the code again because it produced some minor errors when compiled with the Intel Visual Fortran 10.1 compiler, such as:

(i) "the value was too small when converting to REAL; the result is in the denormalized range [1.e-38]": the issue was with the double precision format for the constant 1.e-38, which I changed to 1.d-38.

(ii) "A branch to a do-term-shared-stmt has occurred from outside the range of the corresponding inner-shared-do-construct [230]": I commented out this part of the code, since the subroutine it belongs to is not used in the program.

With these minor modifications the source code compiles without errors, although I get some warning messages (on both compilers) like:

fort: Warning: newray1.9.1a.f, line 893: Variable SPELAT is used before its value has been defined
ratlat = (spelat-oldlat)/(temlat-oldlat)

etc.

I am compiling my source code

(i) with the f77 compiler on my university's server (Alpha processor, Linux), and

(ii) with the Intel Visual Fortran Compiler 10.1 on my PC (Intel Core2 Duo T8300, 2.4 GHz, Windows XP).



I compile the source code with the Intel compiler with optimization off (the /Od option). I am not sure what this does.
The number of output points generated with this compiler is greater than with f77! However, the relative error between the output values from the two compilers is small.

If both compilers are compiling the same code with the same inputs, then why does the output differ in (i) the number of points generated, and (ii) the output values?

Any suggestions will be very helpful.

Thanks
jimdempseyatthecove
Honored Contributor III

>>"the value was too small when converting to REAL; the result is in the denormalized range [1.e-38]": the issue was with the double precision format for the constant 1.e-38, which I changed to 1.d-38.

If your other compiler saw 1.e-38, said "phtth", and silently substituted something equivalent to TINY(0.0_4), then expect the results to be different. Different numbers in, different numbers out.

Also, if you are assigning Real_4_var = 1.d-38, then expect the same thing.
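
A minimal sketch of this effect (a hypothetical program, not from the thread's code): 1.e-38 is smaller than TINY(1.0_4), so a REAL(4) can hold it only as a denormal (or flush it to zero), no matter whether the constant is written with an E or a D exponent.

program denorm_demo
  implicit none
  real(4)          :: r4
  double precision :: r8
  print *, 'TINY for real(4):', tiny(1.0_4)    ! about 1.18E-38
  print *, 'TINY for real(8):', tiny(1.0d0)    ! about 2.23E-308
  r4 = 1.e-38      ! below TINY(1.0_4): representable only as a denormal
  r8 = 1.d-38      ! a double-precision constant, comfortably in range
  print *, 'r4 =', r4, '  r8 =', r8
  r4 = 1.d-38      ! same problem as the first assignment: the real(4)
  print *, 'r4 =', r4  ! target still cannot hold 1.d-38 as a normal number
end program denorm_demo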

Jim Dempsey

John4
Valued Contributor I
Quoting - mayank8281

With these minor modifications the source code compiles without errors, although I get some warning messages (on both compilers) like:

fort: Warning: newray1.9.1a.f, line 893: Variable SPELAT is used before its value has been defined
ratlat = (spelat-oldlat)/(temlat-oldlat)

etc.

If SPELAT is used before its value has been defined, then the value stored in it can be anything. Since you are using different machines, there is no guarantee that the value in SPELAT is always the same (unless you use some compiler flag you haven't mentioned yet). The initial value in SPELAT could just be spurious (i.e., changing randomly every time you compile/run on the same machine).

Also, your post suggests that the code compiles just fine in the f77 compiler, but gives some errors under ifort, so maybe the results you're obtaining with the f77 compiler depend on implementation bugs in this particular compiler.

Steven_L_Intel1
Employee
/Od means do not optimize. What happens if you leave that off?

Please add /QxT and see if the results are better using the Intel compiler. On the Alpha, you were using standard IEEE floating-point arithmetic, but with the Intel 10.1 compiler you are using the x87 floating point instructions, which can introduce inconsistencies in results. Version 11 defaults to using SSE instructions, which should be closer to the Alpha. /QxT tells the compiler to optimize for the Core 2 and use SSE instructions.
TimP
Honored Contributor III
In addition to specifying SSE, you should invoke one of the options to require IEEE arithmetic and observance of source code parentheses, such as /fp:source. If your Alpha comparisons are done with abrupt underflow, you would follow /fp:source with /Qftz.
Steven_L_Intel1
Employee
Quoting - tim18
In addition to specifying SSE, you should invoke one of the options to require IEEE arithmetic and observance of source code parentheses, such as /fp:source. If your Alpha comparisons are done with abrupt underflow, you would follow /fp:source with /Qftz.
To better match Alpha, I would recommend /fpe:0 rather than /Qftz.
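
For reference, a hypothetical command line combining the options suggested so far (using the source file name newray1.9.1a.f from the warning quoted earlier) would be:

ifort /QxT /fp:source /fpe:0 newray1.9.1a.f

Leaving /Od off lets the compiler's default optimization apply, as suggested above.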
mayank8281
Beginner
Hi,

Thanks to everyone who replied to my query.

I tried all the options (/QxT, /fp:source, etc.) but I am still getting differences in the results. Are the results different because the source code is being compiled by different compilers? I compiled the same source code with f77 and f95 on the same Alpha processor and I am getting different results!

I compared the results of the Intel compiler 10.1 (on my PC, Intel Core2 Duo T8300, 2.4 GHz) with f95 (on the Alpha processor) for the same source code and they are identical! This means that computing on my PC versus the Alpha processor is not making any difference. Is this difference due to the compilers, f77 vs. Intel 10.1?

Note: The source code is very old (it was written in 1964-1966! :) ) and was modified later many times.
TimP
Honored Contributor III
I think you have verified that the difference is associated with the instruction sets you have chosen. Your old compiler for PC is limited to the x87 instruction set, where all expressions are evaluated in double or double extended precision. That is not a question of f77 vs f90, except that the f95 standard was in effect by the time the new PC instruction sets were introduced, so there were relatively few compilers which didn't support f90/f95 but did support SSE. g77 was one which is still available, but not maintained.
Compaq Fortran (f95) was restricted to x87 instructions, but supported f95 very well.
Steven Lionel is the expert on Alpha around here, but even he would likely need more information from you to begin to answer your question about how you were able to change your results by going from f77 to f95 on Alpha. Among the possibilities is that f77 permitted compilers to use double precision constants implicitly, even when the source declared them as single. Later compilers require special options to do that, as it is contrary to the more recent standards. It was not allowed by the f66 standard, but many compilers probably didn't enforce it.
Even though your code development began prior to the f66 standard, changes in the Fortran standard would not explain differences in results. There was no hardware floating-point standard prior to the development of machines like the Alpha, so the code should not have been written to depend on a specific floating-point behavior, yet your observations indicate that it does.
Steven_L_Intel1
Employee
I think Tim has got it. The "F77" compiler on Alpha (and not the F90 compiler) did give you larger precision for real constants than the standard specified. You can use /assume:fpconstant to get that behavior. If you do something like this:

double_variable = 3.1415926535897

and assume that all the digits in the constant will be used, the best thing is to make those constants double precision by adding "D0" at the end, or use /assume:fpconstant.
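
As a small illustration (a hypothetical program, not taken from the original code), here are the same digits assigned with and without a double-precision exponent:

program constant_demo
  implicit none
  double precision :: a, b
  a = 3.1415926535897      ! single-precision constant: only ~7 digits survive
  b = 3.1415926535897d0    ! the d0 exponent makes it a double-precision constant
  print *, 'without d0:', a
  print *, 'with d0   :', b
end program constant_demo

Compiling with /assume:fpconstant makes the first assignment keep the extra digits as well, which mimics the old Alpha F77 behavior described above.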
gib
New Contributor II
Quoting - mayank8281
Hi,

Thanks to everyone who replied to my query.

I tried all the options (/QxT, /fp:source, etc.) but I am still getting differences in the results. Are the results different because the source code is being compiled by different compilers? I compiled the same source code with f77 and f95 on the same Alpha processor and I am getting different results!

I compared the results of the Intel compiler 10.1 (on my PC, Intel Core2 Duo T8300, 2.4 GHz) with f95 (on the Alpha processor) for the same source code and they are identical! This means that computing on my PC versus the Alpha processor is not making any difference. Is this difference due to the compilers, f77 vs. Intel 10.1?

Note: The source code is very old (it was written in 1964-1966! :) ) and was modified later many times.
I don't think you've said how much your results have changed. If the change is significant, and if it is the result of the sort of real(4) vs real(8) or fp implementation issues that have been mentioned, I would be concerned about the robustness of the code. Generally speaking, you don't want your results to be very sensitive to small changes (that way chaos lies!).

Steve: I can edit this post, but not the one on the IVF vs CVF thread. Strange!

Edit: Ah, thanks!
Steven_L_Intel1
Employee
gib, the editing thing, when it was fixed a while back, applied to new threads created after a certain point.
mayank8281
Beginner
Quoting - gib
I don't think you've said how much your results have changed. If the change is significant, and if it is the result of the sort of real(4) vs real(8) or fp implementation issues that have been mentioned, I would be concerned about the robustness of the code. Generally speaking, you don't want your results to be very sensitive to small changes (that way chaos lies!).

Steve: I can edit this post, but not the one on the IVF vs CVF thread. Strange!

Edit: Ah, thanks!
Hi everyone,

Thanks a lot for your help.


I have changed all the constants in the code to double precision (D0) as suggested. Changing the constants to double precision explicitly has in fact changed the results (they are more accurate than before), but the results from the f77 compiler are still not identical to those from the Intel 10.1 compiler on my PC. Yes, the code is sensitive to small changes. I also encountered a very different problem while debugging the code.

The code has a subroutine (say 'A') which is called from the main program. There is a variable 'S' (implicitly typed as double precision, not a COMMON variable) which is initialized to 0.0 at the start of the main program. It is set to either 1.0 or 0.0 in subroutine A based on an IF() condition. When the IF() condition is met, S is assigned 1.0 and control returns to the main program, which prints some variables and then calls subroutine A again. Now here is the problem:

With the f77 compiler on the Alpha processor:

When control returns to the main program from A after S has been assigned the value 1.0 in A, the value of S is 0.0 and not 1.0 in the main program. When the subroutine is called again, the value of S is 0.0 in the main program before the call, but it automatically takes the value 1.0 once inside subroutine A.


With the Intel compiler on the Core2 Duo processor on my PC:

When control returns to the main program from A after S has been assigned the value 1.0 in A, the value of S is 0.0 and not 1.0 in the main program. When the subroutine is called again, the value of S is 0.0 in the main program before the call, but now it does not take the value 1.0 inside subroutine A as it did with f77; instead it has the value 0.0.

This was causing problems with the functioning of my program. I then replaced S with iS (implicitly typed as integer), made it a COMMON variable, and the bug is gone. But I am very curious to know what was happening when control was transferred from subroutine A to the main program and back. Why did the value of S change from 1.0 to 0.0 and back to 1.0 in one case but not in the other?


There is another problem. In the main program, after the statement CALL A (calling subroutine A as mentioned above), I am printing the values of some variables with a PRINT* statement (say a, b, c; these variables are conditional variables).

When I do this, the values of some variables (say r, p) calculated by the f77 and Intel compilers do not match even to the 3rd decimal place, but when I comment out this statement the values match to 12 decimal places! My question is:

(1) Can a PRINT* statement change the value of any variable (especially r, p)? (Note: all the variables are defined as double precision.)

It is also worth mentioning that the code uses the Runge-Kutta and predictor-corrector methods to calculate the values of (r, p).

Any suggestions would be very helpful.


TimP
Honored Contributor III
If a print statement changes the results significantly, the usual implication, as you've already been reminded, is that you didn't initialize a variable, or you have an array bounds violation.
You've also been reminded that there did exist f77 compilers which had implicit SAVE, but that was never a standard Fortran language feature (although F77 didn't require that an option be available to diagnose non-standard usage). Code which is wrong because it expects implicit SAVE is not reliably diagnosable anyway. Implicit SAVE doesn't guarantee any particular initial value, but it might have always given the same result until you changed platforms.
If you are getting confused over implicit data typing, you've validated the standard advice to use IMPLICIT NONE to force yourself to avoid that problem. That was available as an extension in most F77 compilers.
With so many words, I didn't see whether you ever agreed to try /check.
Steven_L_Intel1
Employee
If you are comparing a value against exactly 0.0 or 1.0, you may find that the value is very close to 0.0 or 1.0 but not quite. This will cause equality comparisons to fail and, should you convert a not-quite-1.0 to an integer, you'll get 0. I don't know what your code wants to do, but perhaps comparing with a tolerance or using NINT would help.

I also agree that if a PRINT changes results then you almost certainly have a bug in your code.
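
As an illustrative sketch of both points (hypothetical values, not from the posted code), the sum below is close to, but not exactly, 1.0:

program compare_demo
  implicit none
  double precision :: s
  double precision, parameter :: tol = 1.0d-9
  integer :: i
  s = 0.0d0
  do i = 1, 10
     s = s + 0.1d0               ! accumulates to 0.99999999999999989, not 1.0
  end do
  if (s == 1.0d0)           print *, 'exact comparison sees 1.0'
  if (abs(s - 1.0d0) < tol) print *, 'tolerance comparison sees 1.0'
  if (nint(s) == 1)         print *, 'NINT(s) is 1'
  print *, 'INT(s) =', int(s)    ! truncation gives 0, as described above
end program compare_demo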
mayank8281
Beginner
Quoting - tim18
If a print statement changes the results significantly, the usual implication, as you've already been reminded, is that you didn't initialize a variable, or you have an array bounds violation.
You've also been reminded that there did exist f77 compilers which had implicit SAVE, but that was never a standard Fortran language feature (although F77 didn't require that an option be available to diagnose non-standard usage). Code which is wrong because it expects implicit SAVE is not reliably diagnosable anyway. Implicit SAVE doesn't guarantee any particular initial value, but it might have always given the same result until you changed platforms.
If you are getting confused over implicit data typing, you've validated the standard advice to use IMPLICIT NONE to force yourself to avoid that problem. That was available as an extension in most F77 compilers.
With so many words, I didn't see whether you ever agreed to try /check.
Hi Tim,
Can you please elaborate on this so that I can understand it better?

Thank you
TimP
Honored Contributor III
Quoting - mayank8281
Hi Tim,
Can you please elaborate on this so that I can understand it better?

IMPLICIT NONE is explained well in several web references, e.g.
http://publib.boulder.ibm.com/infocenter/comphelp/v8v101/index.jsp?topic=/com.ibm.xlf101a.doc/xlflr/implict.htm
http://www.personal.psu.edu/users/j/h/jhm/f90/statements/implicit.html
This is generally considered to remove opportunities for mistakes, as well as making the typing rules resemble other programming languages.
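
As a small illustration (a hypothetical program, with variable names borrowed from the warning quoted earlier in the thread), IMPLICIT NONE turns a misspelled name into a compile-time error instead of a silently created new variable:

program implicit_demo
  implicit none
  double precision :: spelat, ratlat
  spelat = 1.0d0
  ratlat = spelat
  ! ratlot = spelat     ! under IMPLICIT NONE this misspelling is a compile-time
                        ! error; without IMPLICIT NONE, RATLOT would silently be
                        ! created as a new, implicitly typed REAL variable
  print *, ratlat
end program implicit_demo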

SAVE is more often explained in F77 manuals e.g.
http://docs.hp.com/cgi-bin/doc3k/B3150190022.12120/34
DEC f77 compilers often acted as if SAVE were specified in each subroutine and function. SAVE by itself makes all local variables persist from the previous invocation of the subroutine (although compilers have been known to fail in this respect). It may disable some optimizations, and it will prevent parallelization. Contrary to the belief of many who used DEC compilers, default SAVE was not common among other f77 compilers; rather, it was a holdover from the practice of many f66 compilers.
A consequence of the default SAVE behavior, at least in the VAX Fortran, was that some loop optimizations would be disabled if you re-used the same local variable name for a separate purpose.
You might put SAVE in each subroutine, to see if it helps give consistent results, then find out which subroutines actually require it, and which variables (by naming them in the SAVE, thus implicitly removing the others from SAVE).
DEC F77 probably treated DATA the same way as f90, with SAVE implicit for the variables in the DATA. Optimizations would not affect the interpretation of DATA.

The syntax is explained in the documentation in the ifort installation directory as well.
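
A minimal sketch of the behavior described above for the variable S (hypothetical names, not the original program): the local flag below keeps its value between calls only if SAVE is in effect, which is why a compiler that applies implicit SAVE and one that does not can disagree.

program save_demo
  implicit none
  call a(.true.)     ! first call assigns the local flag
  call a(.false.)    ! second call: the flag keeps its value only under SAVE
end program save_demo

subroutine a(set_it)
  implicit none
  logical, intent(in) :: set_it
  double precision :: s       ! without SAVE, s is undefined on the second call
  ! save :: s                 ! uncomment to emulate the old implicit-SAVE behavior
  if (set_it) s = 1.0d0
  print *, 'inside A, s =', s ! a runtime check such as /check can flag the
end subroutine a              ! undefined use that occurs without SAVE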
Steven_L_Intel1
Employee
Tim's explanation is good. I'll just add that DATA (or an initialization clause in the variable declaration) implying SAVE is now part of the Fortran standard.
mayank8281
Beginner
Hi Steve,

I am working on Tim's and your suggestions.

I have some interesting observations from debugging my code.
I found that the values of a trigonometric function (DATAN) calculated on my PC (Intel Fortran 10.1) and on the Alpha (f77) are different. The function DATAN2 also gives different results on the two computers, and I found that the values from the two systems agree only to the 14th or 15th decimal place, even though I have defined all the variables, constants, and functions as double precision.

I want to know how and why these values are different. Is this difference because of the compiler?
TimP
Honored Contributor III
The compiler can take direct responsibility for atan2 only in x87 /arch:ia32 mode, where the firmware intrinsics in the CPU can be used. These carry 11 bits of extra precision, so they should be capable of good accuracy when rounded to double precision.
The svml functions aren't expected to be as accurate. As the svml library comes with the Intel compilers, you could impute some responsibility to the compiler.
You might get more insight into this question by running a test suite, such as ELEFUNT from netlib.org.
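
To put the 14th-15th digit differences in perspective, here is a small illustrative program (hypothetical input values, not from the original code). Double precision carries only about 15-16 significant decimal digits, so agreement to 14 or 15 digits means the two results differ by only a few units in the last place (ULPs), which is the level at which differently rounded math libraries are expected to differ:

program atan2_digits
  implicit none
  double precision :: x, y, r
  x = 1.2345678901234567d0
  y = 0.9876543210987654d0
  r = datan2(y, x)
  print *, 'datan2(y,x)        =', r
  print *, 'spacing(r) (1 ULP) =', spacing(r)      ! size of the last bit of r
  print *, 'epsilon(1.0d0)     =', epsilon(1.0d0)  ! ~2.2e-16, about 16 digits
end program atan2_digits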