Intel® Fortran Compiler

Optimization Debugging

Groundsel
Beginner
With our software, we normally do our testing with a DEBUG compilation so as to get traceback messages in the event of an error. After testing, we then generate a (Visual Studio) RELEASE version for actual work. I got a surprise a while ago when I chanced to discover small but palpable differences between the values generated by the DEBUG and RELEASE builds. By switching off optimization in the latter, I could get them to agree.

I deem it a dangerous temptation to write off the discrepancies as "small", since I do not know that they will always be so. Should I simply switch off optimization entirely, since it gains us little for this particular software? Or is this indicative of a vectorization or parallelization discrepancy in the interpretation of our code? I have not worried much about the details of optimization before. If optimization is working properly, should the answers obtained always be EXACTLY the same as with the unoptimized compilation? Or are there good reasons for slight - but unimportant - differences?
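
To illustrate the kind of effect I mean (a hypothetical sketch, not our actual code): re-grouping a floating-point sum can change the single-precision result, and an optimizer may re-group expressions unless told to honor the source.

    program assoc_demo
      implicit none
      real :: a, b, c
      a = 1.0
      b = 1.0e8
      c = -1.0e8
      ! 1.0 is below half an ulp of 1.0e8 in REAL(4), so (a + b) loses a entirely.
      print *, '(a + b) + c =', (a + b) + c   ! 0.0 when grouped as written
      ! Grouped the other way, b and c cancel exactly and a survives.
      print *, 'a + (b + c) =', a + (b + c)   ! 1.0 when grouped as written
    end program assoc_demo

An optimizer that re-associates one grouping into the other changes the printed value, even though the two expressions are algebraically identical.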


Cheers!
Tom Stevens
TimP
Honored Contributor III
If your application depends on the evaluation model implied by debug mode, in which single-precision expressions are evaluated in double precision but every assignment is rounded back to the declared precision, then you do have cause for concern about numerical differences. If it depends on consistency options such as /assume:protect_parens and /fp:source, or /Qprec-div and /Qprec-sqrt, you should set those in the release build as well. And if it depends on promotions to double precision, those promotions should be written explicitly into the source code.
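
For instance (a sketch, with hypothetical file and variable names), a release build that preserves source expression semantics might be compiled as

    ifort /O2 /fp:source /assume:protect_parens /Qprec-div /Qprec-sqrt mycode.f90

and a promotion the results rely on would be made explicit in source rather than left to whichever evaluation mode is in effect:

    real :: x, y
    double precision :: acc
    ! Request the double-precision product explicitly; this no longer
    ! depends on the debug-mode evaluation model.
    acc = acc + dble(x) * dble(y)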