So here is the problem. I've got some older, but well-written, legacy code that is used in some important software. In the process of making some upgrades to the software we realized that, simply by compiling it with a new compiler (Composer XE), the results of the calculations change considerably (>10%). Digging through, I see that the problem comes down to rounding error: the crucial part of the code takes the difference of the squares of large numbers. The delta between those numbers is small, so the result is near the noise floor for single-precision math.

The problem is that this code is currently being used, and has been used in the past, for high-profile decision making based on an executable compiled back in the Compaq Fortran days. There is no way I can 'upgrade' the software and have it produce different results.

My question is: are there any compiler switches or changes I can make to force the compiler to use the older single-precision math? I could go back to the old compiler, but that is becoming more and more difficult, and is hardly forward-looking. I am also looking into changing things to double precision, but again, that will likely change the answers, which is not acceptable at this point.
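To make the failure mode concrete, here is a tiny standalone sketch of the kind of cancellation involved (the values and variable names are made up for illustration and are not from the actual code). In single precision, the direct difference of the squares keeps only a couple of significant digits, so any change in how the compiler rounds or contracts the intermediate products can move the answer noticeably:

      program cancel_demo
        implicit none
        real :: a, b, direct, factored
        a = 10000.12            ! large, nearly equal values (illustrative only)
        b = 10000.11
        ! Direct form: both squares are ~1.0e8, so their ~2.0e2 difference
        ! sits only a few ULPs above single-precision rounding noise.
        direct   = a*a - b*b
        ! Algebraically identical factored form, which avoids most of the
        ! cancellation and is far less sensitive to compiler rounding choices.
        factored = (a - b) * (a + b)
        print *, 'a*a - b*b   =', direct
        print *, '(a-b)*(a+b) =', factored
      end program cancel_demo

The factored form is shown only to illustrate why the result is so sensitive; rewriting the production code that way would of course also change the answers, which is exactly what I cannot do.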
Thanks!
For a real exponent, a power is evaluated through transcendental functions: x**y = exp(y*ln(x)) = 2**(y*log2(x)). Evaluation of the transcendental functions may be much slower, and problems could arise if the variable x were not positive.
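A quick way to see that equivalence (and why it matters numerically) is to compare the intrinsic power operator against the explicit transcendental form; the numbers below are arbitrary and only meant as a sketch:

      program pow_demo
        implicit none
        real :: x, y, direct, via_exp
        x = 1.2345               ! must be positive for the log-based form
        y = 3.5
        ! The compiler/runtime may lower a real-exponent power to the math
        ! library, which is essentially the transcendental rewrite below.
        direct  = x**y
        via_exp = exp(y * log(x))
        print *, 'x**y          =', direct
        print *, 'exp(y*log(x)) =', via_exp
      end program pow_demo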
