Intel® Fortran Compiler

Different results for different processors?

Joachim_Herb
Beginner
1,019 Views

Hello,

I have a (new) problem which I do not understand: my code produces different results on different computers (with different (Intel) CPU generations). I created a small test program that reproduces the effect (actually, the result in this case is off by just one bit):

    program Console3

    implicit none

    ! Variables

    real(8) :: phi, sinphi

    ! Assigning a BOZ literal to a real variable is an Intel extension:
    ! it stores the bit pattern directly into the real(8)
    phi = z'400251718A7B6EA8'

    sinphi = dsin(phi)

    ! Body of Console3
    print *, 'phi'
    print *, phi
    write (*, 2000) phi
    print *, 'sin phi:'
    print *, sinphi
    write (*, 2000) sinphi
 2000 format(Z)
    end program Console3

On my newer computer (Intel(R) Core(TM) i7-6700), the program output is:

$ x64\Debug\Console3.exe
 phi
   2.28976734341798
       400251718A7B6EA8
 sin phi:
  0.752483826879143
       3FE81458F666E005

On the older computer (Intel(R) Xeon(R) CPU E5-2430 0), the output is:

>x64\Debug\Console3.exe
 phi
   2.28976734341798
       400251718A7B6EA8
 sin phi:
  0.752483826879143
       3FE81458F666E006

Both are running Windows 7. The program was compiled with the standard settings in Visual Studio. The result stays the same on each computer, independent of the compiler version (16.0.4, 17.0.2) or the libmmd.dll version (16.0.4, 17.0.2).

If I compile the program on Linux (Intel(R) Xeon(R) CPU E5-2680), I get the results of the "old" Windows Xeon computer:

> ifort --version
ifort (IFORT) 17.0.2 20170213
Copyright (C) 1985-2017 Intel Corporation.  All rights reserved.

> ifort -o test Console3.f90

> ./test
 phi
   2.28976734341798
       400251718A7B6EA8
 sin phi:
  0.752483826879143
       3FE81458F666E006

If I step through the disassembly, it differs between the two Windows machines.

So my question is: Is this expected behavior? Can I force the new computer to use the "older" code? Or did I mess up the library paths and am now using a completely different math library on one of the two computers? If the latter, is there a way to see at runtime which library (version/path) the program is using?

Thank you for any help

Joachim
7 Replies
andrew_4619
Honored Contributor II

https://software.intel.com/en-us/articles/consistency-of-floating-point-results-using-the-intel-compiler

Different machines, different code paths, different optimisations, etc. It isn't any surprise. So which one is "more" correct, and why does it matter?

 

Joachim_Herb
Beginner

@andrew_4619: Thank you for your answer. Neither one is more correct. Unfortunately, these differences seem to add up and result in visible differences in the overall output of the (larger) code.

Before looking deeper into this, I would first like to make sure that it is not simply a misconfiguration of library paths.

JVanB
Valued Contributor II

Just for fun, I tried checking the quad-precision results:

program P
   use ISO_FORTRAN_ENV
   implicit none
   real(REAL64) phi
   real(REAL128) qphi, sinphi
   integer(INT64) boz(2)
   ! Interpret the bit pattern as a real(REAL64) value
   phi = real(Z'400251718A7B6EA8',real64)
   qphi = phi
   sinphi = sin(qphi)
   ! View the quad-precision result as two 64-bit integers
   boz = transfer(sinphi,boz)
   ! Rebias the exponent from REAL128's 16383 to REAL64's 1023
   boz(2) = boz(2)-int(Z'3FFF000000000000',INT64)+ &
      int(Z'03FF000000000000',INT64)
   ! Shift left 4 bits (15-bit vs 11-bit exponent field) so the
   ! leading 16 hex digits line up with the double result above
   boz(2) = dshiftl(boz(2),boz(1),4)
   boz(1) = ishft(boz(1),4)
   write(*,'(2(Z16.16:1x))') boz(2:1:-1)
end program P

ifort printed out:

3FE81458F666E005 7EBB516BD8CA5850

So the newer results are slightly more accurate than the old. Such inconsistent results are common when computing results that are very close to 0.5 ulps from the nearest representable number, here the exact result is about 0.495 ulps above the correctly rounded result. It's very expensive to get results guaranteed to within ±0.5 ulps. The difference could be due to a more accurate value of π used for argument reduction or use of FMA operations in function evaluation or a somewhat different algorithm.
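For a concrete check, the one-ulp gap between the two machines' outputs can be confirmed directly from the printed bit patterns. A minimal sketch (the hex constants are simply the two results quoted above):

```fortran
program ulp_gap
   use iso_fortran_env, only: int64, real64
   implicit none
   integer(int64) :: bits_new, bits_old
   real(real64)   :: r_new, r_old
   ! Bit patterns of sin(phi) as printed on the newer and older machine
   bits_new = int(z'3FE81458F666E005', int64)
   bits_old = int(z'3FE81458F666E006', int64)
   ! Reinterpret the patterns as IEEE double-precision values
   r_new = transfer(bits_new, r_new)
   r_old = transfer(bits_old, r_old)
   ! Adjacent bit patterns of the same sign are adjacent doubles,
   ! so the integer distance is the distance in ulps
   print '(a,i0)', 'ulp distance: ', abs(bits_old - bits_new)   ! prints 1
   print '(a,es23.16)', 'absolute gap: ', abs(r_old - r_new)
end program ulp_gap
```

Both values round to the same 15-digit decimal, which is why the list-directed `print *` output looks identical on the two machines while the hex dump differs.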

Joachim_Herb
Beginner

Thank you for all your answers.

Now things get even weirder: the results above (with the differing outputs) were produced when compiling with Visual Studio 2015. The older computer has it installed; the newer one only has the runtime libraries of that Visual Studio (2015) version.

The newer computer has only the Visual Studio Shell (of Visual Studio 2013) with the Fortran compiler installed. The older one did not have the corresponding runtime library.

Now I have also installed the Visual Studio 2013 runtime on the older computer (ok, confusing: the older computer, but the other computer's Visual Studio version).

Suddenly the results change: if I compile the program with Visual Studio 2013 on the new computer, the same program binary gives the same result on both computers:

Run on old:

 phi
   2.28976734341798
       400251718A7B6EA8
 sin phi:
  0.752483826879143
       3FE81458F666E005

Run on new:

 phi
   2.28976734341798
       400251718A7B6EA8
 sin phi:
  0.752483826879143
       3FE81458F666E005

If I compile it on the old computer (with Visual Studio 2015):

Run on old:

 phi
   2.28976734341798
       400251718A7B6EA8
 sin phi:
  0.752483826879143
       3FE81458F666E006

Run on new:

 phi
   2.28976734341798
       400251718A7B6EA8
 sin phi:
  0.752483826879143
       3FE81458F666E006

These results were all compiled as "Release". With "Debug", I get this on the new one:

 phi
   2.28976734341798
       400251718A7B6EA8
 sin phi:
  0.752483826879143
       3FE81458F666E005

The version of ifort is the same on both computers (16.0.4.246).

Regardless of what is "correct": why does the Visual Studio runtime library affect the results? It is only Fortran code. Shouldn't the sin function come from libmmd.dll?

(For my first tests, I compiled everything with Visual Studio 2015 on the old computer, because both computers had the runtime libraries for that version.)

What I would be really interested in: is there a way to determine which libraries are used? (I know about Dependency Walker.) Ideally, how can the program determine this at runtime and add it to its output? Is there anything like module variables that can be read out at runtime? I have seen some pages about MKL_VERBOSE (https://software.intel.com/en-us/articles/verbose-mode-supported-in-intel-mkl-112) but this does not seem to work (is it still available in current versions?).
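One way to approach the runtime question, sketched below: on Windows, the Win32 calls GetModuleHandle and GetModuleFileName report the full path of a DLL already loaded in the current process. This is an untested sketch (x64 calling convention assumed; querying 'libmmd.dll' specifically and the fixed 260-character buffer are my assumptions, and error handling is omitted):

```fortran
program which_dll
   use iso_c_binding
   implicit none
   interface
      ! HMODULE GetModuleHandleA(LPCSTR lpModuleName)
      function GetModuleHandleA(name) bind(C, name='GetModuleHandleA')
         import :: c_ptr, c_char
         character(kind=c_char), intent(in) :: name(*)
         type(c_ptr) :: GetModuleHandleA
      end function GetModuleHandleA
      ! DWORD GetModuleFileNameA(HMODULE hModule, LPSTR lpFilename, DWORD nSize)
      function GetModuleFileNameA(handle, buf, bufsize) &
            bind(C, name='GetModuleFileNameA')
         import :: c_ptr, c_char, c_long
         type(c_ptr), value :: handle
         character(kind=c_char), intent(out) :: buf(*)
         integer(c_long), value :: bufsize
         integer(c_long) :: GetModuleFileNameA
      end function GetModuleFileNameA
   end interface
   type(c_ptr) :: h
   character(kind=c_char) :: path(260)
   integer(c_long) :: n
   ! Handle is non-null only if the DLL is already loaded in this process
   h = GetModuleHandleA('libmmd.dll' // c_null_char)
   if (c_associated(h)) then
      n = GetModuleFileNameA(h, path, 260_c_long)
      print *, 'libmmd.dll loaded from: ', transfer(path(1:n), repeat(' ', int(n)))
   else
      print *, 'libmmd.dll is not loaded in this process'
   end if
end program which_dll
```

Printing this path next to the program's normal output would show immediately whether the two machines resolve libmmd.dll to different files.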

Thank you again for any help.

andrew_4619
Honored Contributor II

What compiler options are you using? Quoting from the article I linked in #2:

Compiler options let you control the tradeoffs between accuracy, reproducibility and performance. Use

/fp:precise /fp:source (Windows*) or
-fp-model precise -fp-model source (Linux* or OS X*)

to improve the consistency and reproducibility of floating-point results while limiting the impact on performance.
If reproducibility between different processor types of the same architecture is important, also use

/Qimf-arch-consistency:true (Windows) or
-fimf-arch-consistency=true (Linux or OS X)

For best reproducibility between processors that support FMA instructions and processors that do not, use also /Qfma- (Windows) or -no-fma (Linux or OS X). In the version 17 compiler, best reproducibility may be obtained with the single switch /fp:consistent (Windows) or -fp-model consistent (Linux or OS X), which sets all of the above options.
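For the test case in this thread, the corresponding command lines might look as follows (Linux spellings shown; the flags are taken verbatim from the article quoted above, so treat this as a sketch rather than a verified recipe for every compiler version):

```shell
# Version 16/17: reproducibility-oriented build of the test program
ifort -fp-model precise -fp-model source -fimf-arch-consistency=true -no-fma \
      -o test Console3.f90

# Version 17: a single switch that implies all of the above
ifort -fp-model consistent -o test Console3.f90
```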

andrew_4619
Honored Contributor II

In your real application, are the differences in results significant in real-world terms? If so, are you using the options that give the best consistency/accuracy? Failing that, your solution method/algorithms must then be examined.

Joachim_Herb
Beginner

You are, of course, right. But before looking into the numerics, I would like to make sure that I understand what causes my problem, especially whether it is related to the software installation. The latter should be much easier to fix than working on the internals of the solver.

andrew_4619 wrote:

In your real application, are the differences in results significant in real-world terms? If so, are you using the options that give the best consistency/accuracy? Failing that, your solution method/algorithms must then be examined.
