Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

Differences between debug and non-debug engines

ferrad
New User
I am wondering whether I am having a problem with single -> double conversion.
Our GUI reals are singles, and our engine reals are doubles. We have lots of statements to copy the data from the GUI to the engine using:
d_x1 = s_x1
or sometimes,
d_x2 = dble(s_x2)
etc, where d_ are doubles, s_ are singles.
Now I am investigating differences between the debug and non-debug engines, and am concerned that these may be caused by different garbage being put into the extra 32 bits the single variable doesn't have.

i.e. if the single variable is 1.340000 and the double variable becomes 1.3400002345445 (say) in the debug version, will the double variable always get the same garbage in the last digits, or will it be different in the non-debug engine?
I am using the Intel V9.0 compiler. We do not have the same problem with the Compaq v6.6 compiler.
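
For reference, a minimal sketch of the kind of copy in question (the variable names and the 1.34 value are illustrative only):

program widen_example
  implicit none
  real(4) :: s_x1
  real(8) :: d_x1
  s_x1 = 1.34                ! single precision value from the GUI
  d_x1 = s_x1                ! widened to double for the engine
  ! print more digits than a single carries, plus the raw bit patterns
  print '(A,F22.16)', ' d_x1      = ', d_x1
  print '(A,Z8.8)',   ' s_x1 bits = ', transfer(s_x1, 0)
  print '(A,Z16.16)', ' d_x1 bits = ', transfer(d_x1, 0_8)
end program widen_example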
TimP
Honored Contributor III
I think you may not have given enough information for us to understand your question. A specific example might help.

If you are copying the same single precision value to double precision in each case, there should be no difference. You could get slightly different single precision values from different orders of expression evaluation with different compilers.

A possible difference would be where you store the result of an expression, with all operands single precision, directly into a double precision variable. If you used ifort 9, with SSE2 options (e.g. -QxW), but did not invoke one of the options which promotes expression evaluation to double (e.g. -Op or -fltconsistency), you would expect the expression to produce a single precision result, which would be extended to double by appending binary zeros. If you departed from those options, you might frequently see a double precision result which is not rounded off exactly to a single precision value. This might happen more frequently with ifort 9 than with CVF, but with CVF you don't have the option of pure single precision SSE.
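
To make that concrete, here is a sketch (s1, s2 and the two d_ variables are illustrative names, not from the original post):

program eval_example
  implicit none
  real(4) :: s1, s2
  real(8) :: d_padded, d_promoted
  s1 = 1.34
  s2 = 2.56
  d_padded   = s1 * s2              ! with pure single-precision evaluation: single result, widened with zero bits
  d_promoted = dble(s1) * dble(s2)  ! operands widened first, product computed in double
  print *, d_padded, d_promoted     ! the two can differ in the low-order bits
end program eval_example

Whether the first assignment really stores a zero-padded single result depends on the evaluation options described above.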

With either compiler, it might not be feasible to cause the debug build to show identical numerical behavior to a non-debug build, assuming that your optimization levels change.

I've put in a lot of words without knowing how relevant they are to your concerns.
jim_dempsey
Beginner
The two numbers are the same. They won't print the same.
Look at the hex and binary values (adjusting for the different number of bits in the exponent):
s_var #3FAB851F         sign 0, exponent 01111111,    mantissa 01010111000010100011111
d_var #3FF570A3E0000000 sign 0, exponent 01111111111, mantissa 0101011100001010001111100000000000000000000000000000
Try using:
d_temp = ANINT(s_var * 1000000.)
d_var = d_temp / 1000000.
In your original post the two variables are the same.
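
A sketch of how that behaves, assuming s_var holds roughly 1.34 and six decimal digits is the precision of interest:

program round_example
  implicit none
  real(4) :: s_var
  real(8) :: d_temp, d_var
  s_var  = 1.34
  d_temp = anint(s_var * 1000000.)  ! nearest whole number of millionths, e.g. 1340000.0
  d_var  = d_temp / 1000000.        ! divide back in double precision
  print '(F20.16)', d_var           ! the double nearest 1.34, rather than the widened single value
end program round_example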
ferrad
New User

Thanks for the replies. What I really want to know is whether the double precision value copied from the single, as above, will always be the same irrespective of which options are used in the Fortran compilation.

i.e. it is not 'garbage' that gets copied into the extra 4 bytes, but rather a set of binary zeros, which will always translate into the same double precision value. Is that correct?

jim_dempsey
Beginner

The answer to that is "it depends". For a real(4) value read from memory and stored into a real(8) in memory, the answer is yes: the bit patterns are identical, with the mantissa simply padded with zeros. However, at a given point in a computation using the x87 FPU on the IA-32 architecture, if you save the value of a partially completed expression (the intermediate value carries more than 4 bytes of precision), then the answer is no.

Secondly, when you print the real(4) you are printing less of the variable's precision than when you print the real(8). That is why you could see the ...03nnn difference in the mantissa, which is below the 0.5 rounding threshold of the formatted printout. Try using E20.12 in the format for the real(4) printout. You will likely see the digits you interpret as junk. This junk is the error in the real(4) caused by rounding off a fraction that exceeds the precision of real(4). The error in the VAX system using real(4) exceeds the error of real(8) on the Intel system. Expect the computations to produce different results when using real(4) vs real(8) variables in your program.
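
A sketch of the kind of printout that exposes those digits (the format widths are just a suggestion):

program print_example
  implicit none
  real(4) :: s_var
  s_var = 1.34
  print '(E14.6)',  s_var   ! roughly the digits a real(4) genuinely carries
  print '(E20.12)', s_var   ! extra digits expose the single-precision roundoff
end program print_example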

If you intend to compare results between the pre-port code on the VAX system and the post-port code on the IA-32/64 system, then declare your data as real(4) when you make your verification runs. Once you are satisfied that computation at the same precision yields reasonable results, you can extend the precision to real(8) to produce better (and different) results.

Jim Dempsey

Intel_C_Intel
Employee
Hello,

Our experience is that mixing single and double precision within one application can cause unexpected/inaccurate numerical results. I would recommend porting everything to double precision, compiling with /real-size:64 /fpconstant /QxN, and defining all constants with full double precision.
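
As a sketch of what defining constants with full double precision means (the variable names are illustrative; /fpconstant is the switch mentioned above):

program constant_example
  implicit none
  real(8) :: c_double, c_single
  c_double = 1.23456789012345d0   ! double-precision literal keeps all of its digits
  c_single = 1.23456789012345     ! treated as a single-precision literal unless /fpconstant promotes it
  print *, c_double, c_single
end program constant_example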

In the GUI you can read/write just part of the digits using FORMAT or equivalent statements (the user should be spared the full information in the nearly 16 digits of double precision).
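
For the GUI side, a sketch of limiting what the user sees (F12.6 is just an example width):

program gui_format_example
  implicit none
  real(8) :: d_var
  d_var = 1.3400000000000001d0
  write (*, '(F12.6)') d_var   ! the user sees 1.340000, not the trailing double-precision digits
end program gui_format_example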

Best Regards,

Lars Petter Endresen