Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

Change in floating point rounding between Versions 11 and 12 of Fortran compiler

Michael_D_11
Beginner
I have recently noted a rather minor discrepancy in a calculation in one of our codes. In the code we calculate the cube root (by exponentiation to the power 1./3.) of a number, 1500. Between compiler versions 11.1.065 and 12.1.0.233 the result of this calculation changed from 11.44714355 (0x41372780) to 11.44714260 (0x4137277F), a change in the last bit of the binary mantissa. The latter value is clearly the more precise binary representation, but the difference in results between compilers (with the same floating-point settings) is leading to noticeable differences in model predictions.

Was a change made in how exponentiation is handled between the two compilers? Was intermediate rounding changed (hence the 1/3 exponent is different)?
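
Not from the original post; a minimal sketch of how one might reproduce the calculation and inspect the single-precision bit pattern, using TRANSFER to view the raw bits (program and variable names are illustrative):

[fortran]
program cube_root_bits
  implicit none
  real    :: b, ans
  integer :: bits

  b   = 1500.
  ans = b**(1./3.)
  ! Reinterpret the 32-bit pattern of the result as an integer
  bits = transfer(ans, bits)
  ! Print the decimal value and its raw hexadecimal representation
  print '(F12.8, 2X, Z8.8)', ans, bits
end program cube_root_bits
[/fortran]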
Steven_L_Intel1
Employee
The ultimate goal of the math library is to produce the "correctly rounded infinite precision result". We are constantly looking for places we can improve results where this goal is not met. You evidently found one where we made an improvement.

In general, there are many factors that can lead to small differences in floating point results. Some are as simple as math library improvements, but others can be more subtle, such as rearranging operations for optimization, use of vectorization, etc. If these cause "noticeable differences" in your application's results, it is perhaps using an unstable algorithm or is peculiarly sensitive to last-bit differences. It's something you have to expect when changing anything about the environment, including different compiler versions or optimization option changes.

There is no guarantee of bit-for-bit sameness of floating point computations. I will also comment that as you are using single precision, you should not expect more than 7 decimal significant digits. You're reporting a change in the 8th decimal digit. Perhaps you will want to do sensitive calculations in double precision.
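
For illustration, a minimal sketch of the double-precision form of the same calculation (program and variable names are illustrative); note that the exponent literal must also be double precision, otherwise 1./3. is still rounded to single precision before the exponentiation:

[fortran]
program cube_root_dp
  implicit none
  double precision :: b, ans

  b = 1500.d0
  ! Use a double-precision exponent so that 1/3 itself is not
  ! rounded to single precision before the exponentiation
  ans = b**(1.d0/3.d0)
  print *, ans
end program cube_root_dp
[/fortran]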
Michael_D_11
Beginner
I know I should not expect more than 7 decimal digits, but the different compilers are giving different binary answers (by 1 bit), and the values I provided in my initial post are the "exact" decimal representations of those binary answers. Yes, I know I have a problem with sensitivity in downstream code, but that isn't a problem I can readily address at this time. I was hoping some combination of settings could result in different versions of the Intel Fortran compiler (v11 and v12) yielding the same result for a given calculation. It appears from your answer that this is unlikely to be achievable.

To me it appears that between v11 and v12 of the compiler some change was made to intermediate rounding such that the following code gives answers that differ by one binary bit.

[fortran]
real b
real ans_b

b     = 1500.
ans_b = b**(1./3.)
[/fortran]

I understand the need for higher precision for sensitive calculations, but I guess I naively assumed that a rather straightforward calculation, one with no possibilities for associative or distributive reordering, would give consistent, if inexact, results.
Steven_L_Intel1
Employee
It's the exponentiation operator that became more accurate.
Michael_D_11
Beginner
Thank you for the prompt reply, Steve. One last question: was this improvement in accuracy made in the compiler or in the run-time math libraries? From what I can tell it seems to be in the compiler, as I don't notice a difference when going from the version 11 to the version 12 library DLLs.
Steven_L_Intel1
Employee
The operation is done in the run-time library. The unoptimized version in both cases calls _powf, while the optimized version calls __libm_sse2_powf.
Michael_D_11
Beginner
Steve,

I am not using the optimized SSE2 functions. Using Dependency Walker, it seems the version 12.1 compiler is using the cbrtf function (in libmmd.dll) while the version 11 compiler is using _powf. Would this explain the differences? If so, is it possible to set a compiler option to force one implementation over the other?
Steven_L_Intel1
Employee
Strange - I could have sworn that I saw it use something called libm_sse2_powf. Anyway, I see that by default it calls libm_sse2_cbrtf. No, you can't force it to call _powf.

I understand the pain it causes when floating point results change, even when the new results are better. But that's the reality of doing floating point computations, and expecting bit-for-bit sameness when the environment changes is unrealistic.
TimP
Honored Contributor III
In a case I worked on where the compiler recognized an opportunity to substitute the SVML cbrt(), -fp-model source would prevent that substitution. Also, -imf-arch-consistency=true is intended to switch math library calls to a version of the library that minimizes architecture dependencies rather than emphasizing speed on specific architectures.
In the example you presented, I would expect certain compilers to make the most accurate possible evaluation at compile time, so I'm reluctant to assume such an example represents behavior of a practical application.
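
For illustration only, the options mentioned above would be passed on the compile line roughly as follows (command lines are illustrative; -fp-model source / /fp:source and -imf-arch-consistency=true / /Qimf-arch-consistency:true are the Linux and Windows spellings of the same options):

[bash]
# Linux driver syntax
ifort -fp-model source -imf-arch-consistency=true example.f90

# Equivalent Windows driver syntax
ifort /fp:source /Qimf-arch-consistency:true example.f90
[/bash]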
Michael_D_11
Beginner
Thank you all for the help and advice. It looks like I have to live with the fact that inconsistent adoption of compiler versions on our project will result in minor changes in results.

By the way, using compiler version 12.1, both the /fp:source and /fp:strict options result in the compiler using the more accurate cbrt call. An interesting finding, though, was that if the compiler could pre-compute a cube root (such as a**(1./3.), where a was defined as a parameter), it would use the less accurate powf if /fp:source was used.
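
A minimal sketch of the two cases being contrasted (program and variable names are illustrative):

[fortran]
program cbrt_fold
  implicit none
  real, parameter :: a = 1500.   ! value known at compile time
  real            :: b, r_const, r_var

  b       = 1500.
  r_const = a**(1./3.)   ! candidate for compile-time evaluation
  r_var   = b**(1./3.)   ! evaluated at run time by the math library
  print *, r_const, r_var
end program cbrt_fold
[/fortran]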
SergeyKostrov
Valued Contributor II
Thank you all for the help and advice. It looks like I have to live with the fact that inconsistent adoption of compiler versions on our project will result in minor changes in results.
...


Hi,

I wouldn't give up until I checked the Floating Point Unit's (FPU) Control Word in both cases.

Could you call the '_control87' CRT function from IVF? For example, in C/C++ it is called like this:

...
UINT uiControlWordx87 = _control87( _PC_53, _MCW_PC );
...

If the FPU Control Words are different in the two cases, then the FPUs are initialized differently. In that case I could assume that a change in the Rounding Control was made, possibly related to the _RC_NEAR, _RC_CHOP, _RC_DOWN, or _RC_UP constants.
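
From the Fortran side, a minimal sketch using the standard IEEE_ARITHMETIC module to query the current rounding mode (assuming a compiler that supports it; this is an alternative to calling _control87, not the suggestion above verbatim):

[fortran]
program check_rounding
  use, intrinsic :: ieee_arithmetic
  implicit none
  type(ieee_round_type) :: rmode

  ! Query the rounding mode currently in effect for real arithmetic
  call ieee_get_rounding_mode(rmode)

  if (rmode == ieee_nearest) then
     print *, 'Rounding mode: to nearest'
  else
     print *, 'Rounding mode: not to nearest (up, down, or toward zero)'
  end if
end program check_rounding
[/fortran]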

Best regards,
Sergey
