Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

I/O Bug? (32 vs 64-bit Compile)

Thomas_F_1
Beginner
1,106 Views

I'm curious if others would consider the following to be a bug.

Given the following code:

program test845887

    character(12) :: input
    real(8) :: value
    
    input = ' 8.45887E-01'
    read(input,"(d12.5)") value
    
    write(*,"(es24.16)") value

end program

The output is different depending on whether it is compiled for 32 or 64 bit architectures:

tom$ ifort -m32 read32or64.f90 
tom$ ./a.out
  8.4588700000000006E-01
tom$ ifort -m64 read32or64.f90 
tom$ ./a.out
  8.4588699999999994E-01

tom$ ifort -V
Intel(R) Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 15.0.7.234 Build 20160519

Normally when I see differences that depend on addressing, I suspect a bug in my code. But after 2 days of tracing through my code, it turns out to be an I/O issue.

Thoughts?

0 Kudos
11 Replies
Steve_Lionel
Honored Contributor III
1,106 Views

Bug? Probably not. The two builds use different code for converting decimal to binary and the reverse. The questions would be which answer is closer to the correctly rounded result, and whether the difference is in the read or the write. I am away from a Fortran-capable device so I can't test this myself. At most there's only one LSB of difference.
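Steve's question — which answer is closer to the correctly rounded result — can be checked outside Fortran. A minimal sketch in Python (an illustration, not part of the original thread): it compares the two candidate bit patterns from the thread against the exact decimal value using rational arithmetic, and cross-checks with Python's own parser, which performs correctly rounded decimal-to-binary conversion.

```python
# Sketch: decide which of two candidate doubles is the correctly rounded
# IEEE-754 result for the decimal value 0.845887.
import struct
from fractions import Fraction

exact = Fraction(845887, 1000000)  # the exact decimal value as a rational

# The two bit patterns observed in the thread (-m64 vs -m32 builds).
d5 = struct.unpack('>d', bytes.fromhex('3FEB11819D2391D5'))[0]
d6 = struct.unpack('>d', bytes.fromhex('3FEB11819D2391D6'))[0]

err_d5 = abs(exact - Fraction(d5))  # Fraction(float) is exact, so the
err_d6 = abs(exact - Fraction(d6))  # comparison involves no rounding

print('...D5 is closer' if err_d5 < err_d6 else '...D6 is closer')
# A correctly rounded parse agrees with the closer candidate:
print(struct.pack('>d', float('8.45887E-01')).hex().upper())
```

This shows that exactly one of the two bit patterns is the correctly rounded conversion; the other is off by one LSB.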

0 Kudos
Thomas_F_1
Beginner
1,106 Views

Steve,

Thanks for your reply. To clarify a little further:

  • The issue is with the read statement. I made this sample after encountering the problem in a much larger code.
  • Shouldn't IEEE standards prevail? There should be no ambiguity on the conversion (assuming the same IEEE flags and rounding modes).
  • This makes a small difference in when fuel melts in our reactor analysis code. Well within uncertainty, but it makes validation a problem when numerical differences occur due to memory addressing.

Thanks!

0 Kudos
Jeff_Arnold
Beginner
1,106 Views

Print the variable using hexadecimal format and compare the results on the two platforms. If the hex value is the same on both (i.e., both builds are converting the decimal string to the same binary value), the difference is caused by the different binary-to-decimal conversion routines used on the two platforms.
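Jeff's diagnostic can be illustrated with a short sketch (Python here, not part of the original thread): a decimal rendering depends on the runtime's binary-to-decimal routine, while a hex dump of the bit pattern is unambiguous, so comparing hex isolates the read side from the write side.

```python
# Sketch: the hex bit pattern identifies the stored double exactly,
# independent of any binary-to-decimal output routine.
import struct

v = float('8.45887E-01')                   # decimal-to-binary conversion
print(f'{v:.16E}')                         # rendering may vary by runtime
print(struct.pack('>d', v).hex().upper())  # bit pattern is unambiguous
```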

0 Kudos
FortranFan
Honored Contributor II
1,106 Views

@Thomas F.,

Your problem is not reproducible on Windows with the Intel Fortran compiler 18.0 BETA:

program test845887

   use, intrinsic :: iso_fortran_env, only : compiler_version

   character(12) :: input
   real(8) :: value

   write(*,*) "Compiler Version: ", compiler_version()

   input = ' 8.45887E-01'
   read(input,"(d12.5)") value

   write(*,"(es24.16)") value
   write(*,"(z0)") value

end program
C:\Fortran>ifort /Qm32 p.f90
Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on IA-32, Version 18.0.0.065 Beta Build 20170320
Copyright (C) 1985-2017 Intel Corporation.  All rights reserved.

Microsoft (R) Incremental Linker Version 14.00.24215.1
Copyright (C) Microsoft Corporation.  All rights reserved.

-out:p.exe
-subsystem:console
p.obj

C:\Fortran>p.exe
 Compiler Version:
 Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on IA-32, Version 18.0.0.065 Beta Build 20170320

  8.4588699999999994E-01
3FEB11819D2391D5

C:\Fortran>ifort /Qm64 p.f90
Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 18.0.0.065 Beta Build 20170320
Copyright (C) 1985-2017 Intel Corporation.  All rights reserved.

Microsoft (R) Incremental Linker Version 14.00.24215.1
Copyright (C) Microsoft Corporation.  All rights reserved.

-out:p.exe
-subsystem:console
p.obj

C:\Fortran>p.exe
 Compiler Version:
 Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 18.0.0.065 Beta Build 20170320

  8.4588699999999994E-01
3FEB11819D2391D5

 

0 Kudos
Thomas_F_1
Beginner
1,106 Views

Here is more detailed testing. The problem occurs only with read statements in 32-bit builds on macOS and Linux. In those cases, the value is rounded up in the LSB. According to this site, the hex value should be 0x3FEB11819D2391D5.

$ cat read32or64.f90 

program test845887

    character(12) :: input
    real(8) :: value
    
    input = ' 8.45887E-01'
    read(input,"(d12.5)") value
    
    write(*,"(es24.16)") value
    write(*,"('0x',z16)") value

end program

On macOS

$ ifort -V
Intel(R) Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 15.0.7.234 Build 20160519

$ ifort -m32 read32or64.f90 
$ ./a.out
  8.4588700000000006E-01
0x3FEB11819D2391D6  <--- rounded UP

$ ifort -m64 read32or64.f90 
$ ./a.out
  8.4588699999999994E-01
0x3FEB11819D2391D5


On Linux

$ ifort -V
Intel(R) Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 15.0.5.223 Build 20150805

$ ifort -m32 read32or64.f90 
$ ./a.out
  8.4588700000000006E-01
0x3FEB11819D2391D6  <--- rounded UP

$ ifort -m64 read32or64.f90 
$ ./a.out
  8.4588699999999994E-01
0x3FEB11819D2391D5


On Windows

$ ifort -V
Intel(R) Visual Fortran Compiler XE for applications running on IA-32, Version 15.0.5.280 Build 20150805

$ ifort /nologo /Qlocation,link,"${VCINSTALLDIR}/bin" read32or64.f90
$ ./read32or64.exe
  8.4588699999999994E-01
0x3FEB11819D2391D5

$ ifort -V
Intel(R) Visual Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 15.0.5.280 Build 20150805

$ ifort /nologo /Qlocation,link,"${VCINSTALLDIR}/bin" read32or64.f90
$ ./read32or64.exe
  8.4588699999999994E-01
0x3FEB11819D2391D5

0 Kudos
jimdempseyatthecove
Honored Contributor III
1,105 Views

>>This makes a small difference in when fuel melts in our reactor analysis code. Well within uncertainty, but it makes validation a problem when numerical differences occur due to memory addressing.

Have you considered that, if reading input from a file (with conversion of text to the internal FP format) produces a 1-LSB difference and your analysis code "blows up", this is a strong indication that the analysis code is potentially in error, in that it is just as likely to .NOT. "blow up" when it should? I suggest you look at all your convergence code and fix any incorrect (too sensitive) assumptions that have been made.

Also, producing exactly the same results may be problematic when a program is optimized: SIMD vectorization, vector reduction, parallelization, new instruction sequences (FMA), common sub-expression elimination, and reordered instruction sequences can all perturb results.

Jim Dempsey

0 Kudos
Thomas_F_1
Beginner
1,106 Views

>> then this is a strong indication that the analysis code is potentially in error in that it is just as likely to .NOT. "blow up" when it should

Our code does not "blow up" just because fuel is melting. I am merely comparing 32- vs. 64-bit builds and noting a small difference where there should be none. I know which levels of optimization are safe and which aren't, and our entire regression suite passes just fine except for this one case.

In the original post I noted:

>> Normally when I see differences that depend on addressing, I suspect a bug in my code. But after 2 days of tracing through my code, it turns out to be an I/O issue

So I understand your comment. But the IEEE 754 standard makes it clear how decimal-to-binary conversion should be done. This case suggests that the macOS and Linux 32-bit runtime libraries are not doing it correctly. Therefore I assert it is a bug on Intel's side, regardless of what our code is doing.

Thanks.

0 Kudos
Kevin_D_Intel
Employee
1,106 Views

@Thomas F. - I submitted this to Development for further analysis. So far I have reproduced it only on Linux IA-32, including with our 18.0 Beta compiler.

(Internal tracking id: CMPLRS-43290)

0 Kudos
mecej4
Honored Contributor III
1,106 Views

Kevin, here is a test program that prints out many numbers for which the decimal-to-binary input conversion routine in the 32-bit IFort runtime on Linux gives results that are off in the LSB. Unlike the code examples given so far in this thread, it avoids the question of decimal-to-binary conversion at compile time versus conversion at run time.

program tieee
implicit none
integer :: ix, j
double precision :: x, y, million = 1d6
character(8) :: num = '0.000000'
!
do ix = 0, 999999
   write (num(3:), '(I6)') ix            ! place the six digits after "0."
   do j = 3, 8                           ! zero-fill any leading blanks
      if (num(j:j) == ' ') num(j:j) = '0'
   end do
   x = ix/million                        ! value via binary division
   read (num, '(F8.6)') y                ! value via decimal-to-binary input conversion
   if (x /= y) write (*, 10) num, x, y   ! report any mismatch in hex
end do
10 format (1x, A8, 2x, Z16, 1x, Z16)
end

I tested this program with the 17.0.2 compiler (IA-32) on openSUSE 13, but the problem is probably present in earlier IFort versions on Linux-32 as well.

The program produces no output on Windows with IFort, nor on Linux and Windows with gfortran 6.3.
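For comparison, here is a rough Python analogue of the same scan (an illustration, not from the thread). Both paths round the same exact rational to nearest — CPython's float parsing and its floating-point division are each correctly rounded — so a correct runtime reports zero mismatches; any output would indicate a conversion bug. A stride is used here to keep the sketch quick.

```python
# Rough analogue of the scan above: compare "0.NNNNNN" parsed from text
# against ix/1e6 computed by binary division. With correctly rounded
# conversions on both paths, the values must be bit-identical.
mismatches = 0
for ix in range(0, 1000000, 97):   # stride keeps this sketch fast
    num = f'0.{ix:06d}'            # e.g. '0.000097'
    x = ix / 1000000.0             # binary division, correctly rounded
    y = float(num)                 # decimal-to-binary input conversion
    if x != y:
        mismatches += 1
        print(num, x.hex(), y.hex())
print('mismatches:', mismatches)   # prints: mismatches: 0
```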

 

0 Kudos
Kevin_D_Intel
Employee
1,106 Views

Thank you. I added this to the internal tracking record.

0 Kudos
Thomas_F_1
Beginner
1,106 Views

Running this as a 32-bit executable on macOS results in 256 lines of output (i.e., discrepancies). In 64-bit, no output is produced.

mecej4 wrote:

Kevin, here is a test program that prints out many numbers for which the decimal-to-binary input conversion routine in the 32-bit IFort runtime on Linux gives results that are off in the LSB. Unlike the code examples given so far in this thread, it avoids the question of decimal-to-binary conversion at compile time versus conversion at run time.


0 Kudos
Reply