With our software, we normally do our testing using a DEBUG compilation so as to get walk-back messages in the event of an error. After testing, we then generate a (Visual Studio) RELEASE version for actual work. I got a surprise a while ago when I chanced to discover small but palpable differences between the values generated by the DEBUG and RELEASE builds. By switching off optimization in the latter, I could get them to agree.
I deem it a dangerous temptation to write off the discrepancies as "small", since I do not know that they will always be so. Should I simply switch off optimization entirely, since it gains us little advantage for this particular software? Is it indicative of a parallelization or other vectorization discrepancy in the interpretation of our code? I have not worried much about the details of optimization before. If optimization is working properly, should the answers obtained always be EXACTLY the same as with the unoptimized compilation? Or are there good reasons for slight - but unimportant - differences?
Cheers!
Tom Stevens
If your application depends on the specific evaluation rules implied by debug mode - single-precision expressions evaluated in double precision, but every assignment rounded back to the declared precision - then you do have cause for concern about numerical differences. If it depends on consistency options such as /assume:protect_parens and /fp:source, or /Qprec-div and /Qprec-sqrt, you should set those explicitly in the release-mode build. If it depends on promotions to double precision, those should be written into the source code.
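As a minimal sketch of that last point (not from the original post; the program and variable names are illustrative), the promotion to double precision can be written explicitly in the source, so the result no longer depends on whether a particular build happens to evaluate intermediates in a wider precision:

! Hypothetical example: make the double-precision promotion explicit
! rather than relying on debug-mode evaluation rules.
program explicit_promotion
   implicit none
   integer, parameter :: dp = kind(1.0d0)
   real :: a, b              ! single-precision inputs
   real(dp) :: total         ! result held in double precision by declaration
   a = 1.0e-7
   b = 1.0
   ! Promote the operands before the arithmetic, so optimized and
   ! unoptimized builds perform the same double-precision addition.
   total = real(a, dp) + real(b, dp)
   print *, total
end program explicit_promotion

For the release configuration itself, the consistency options above go on the compiler command line (or in the corresponding Visual Studio project properties); something along the lines of ifort /O2 /fp:source /assume:protect_parens /Qprec-div /Qprec-sqrt main.f90 is one illustrative combination - the exact set that matters depends on your code.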
