We have some code that we compile using only /Od, with no other optimization settings, and we get numerical differences between the two compiler versions.
We have been investigating this, assuming it must be some flawed or old code (we have tons of legacy code). In one spot we set a number to 1e-6 and then do a .GE. 1e-6 test on that number. Intel 12 returns TRUE (correctly) and Intel 10 returns FALSE in that comparison.
It almost looks as if Fortran 12 by default gives results closer to what Fortran 10 produces with the floating-point strict option. We are still investigating this.
Does anyone know how the /Ox and /Od options affect the floating-point model and optimizations? Are there flags that give the same numerical results in development and release builds, while still letting you debug and inspect variables in development builds?
Would we be better off using the default floating-point options, assuming they give a good balance between speed and accuracy?
If you have existing code and just want the same numbers as in previous Fortran versions, what are the best flags to use?
Our default compile flags are:
/Zi
/Od
/W1 /D "WIN32" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /c
/nologo /warn:nofileopt /Qzero /Qsave /align:rec1byte
/check:bounds
/iface:mixed_str_len_arg
/include:"c:\PROGRA~2\intel\COMPOS~1\compiler\include\ia32"
/include:"c:\PROGRA~2\intel\COMPOS~1\compiler\include"
/check:bounds /debug:full /dbglibs /warn:declarations /check:uninit /compile_only /dll /threads /assume:byterecl /libs:dll
For release builds we flip the /Od to /Ox.
1 Solution
The big difference is that version 11.0 and later generate SSE2 code by default, whereas older versions generate x87 code by default. This means that your program is seeing unpredictable changes in precision due to x87 code sometimes doing single-precision operations in double precision.
There is no guarantee of bit-for-bit compatibility of floating-point results; these can change with different optimization choices, improvements to the math library, and more. But if you want to more closely approximate what version 10 gave you (which you agree is less correct), add /arch:ia32. I will also note that comparing floating-point values for equality is risky no matter what.
4 Replies
Thank you Steve Lionel. We have decided to just accept the Fortran 12 numbers.
We have investigated some numerical differences between version 11 and versions 12.x of the compiler.
Generally, the numerical results are exactly the same for code generated by the different compilers. However, some functions, such as EXP called from C code through the Fortran runtime libraries (LIBMMD), use very slightly different constants in their evaluation and thus return very slightly different results. The differences are very slight, but worth noting in case anyone is wondering; this is likely a slight improvement in the Intel runtime libraries.
x87 math libraries (used with the 32-bit /arch:IA32 option) ought to be more accurate than the libraries that use SSE code. The 12.x compilers introduced an additional set of libraries, invoked by /Qimf-arch-consistency, probably with intermediate accuracy and speed. exp() in particular requires extended precision for range reduction, which is handled differently in the various implementations.
12.x compilers may vectorize more math function calls than 11.x did. Such changes would show up under /Qvec-report. Otherwise, it is difficult to see why you would get differences when using the same options.
