Many older numerical analysis texts (from 2000 and before) recommended the following "trick" to reduce finite-precision error when perturbing a variable, e.g. when computing a finite-difference interval.
...
! perturbation in variable x
h = 0.0001*abs(x)
! temp stores perturbed x
temp = x + h
! a call to a dummy routine to prevent the compiler from optimizing
! away (eliminating) the lines of code above and below
call donothing(temp)
! h is recalculated to "reduce finite precision error"
h = temp - x
...

subroutine donothing(temp)
   real :: temp
end subroutine donothing
It is claimed that "h is now an exactly represented number on the computer".
1. Is this logic still valid with modern-day compilers and their optimization features?
2. Will the function h = NEAREST(0.0001*abs(x), x) in Fortran 90+ perform the same "trick"?
>>Is this logic still valid with modern-day compilers and their optimization features?
A modern-day optimizer will discover that subroutine donothing does nothing, and both the call and the subroutine will be removed entirely.
Placing donothing in an external library may avoid its removal, but interprocedural optimization (e.g. the -ipo option, when the library is built in the same solution) can still remove both the call and the subroutine. In other words, you must structure the builds so that the calling code does not have the callee's code available for examination.
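One way to keep a modern optimizer from eliding the round-trip without any external routine is the VOLATILE attribute introduced in Fortran 2003. This is a sketch on my part, not something proposed in this thread: a VOLATILE variable may change by means the compiler cannot see, so the store to temp and the subsequent reload cannot be removed.

```fortran
program exact_h
   implicit none
   real :: x, h
   ! VOLATILE forces the compiler to actually store x + h into temp
   ! and to reload temp when computing h, defeating the optimization
   ! that the donothing call was meant to prevent.
   real, volatile :: temp

   x = 1.234
   h = 0.0001*abs(x)
   temp = x + h      ! rounded to a representable value in memory
   h = temp - x      ! h now combines exactly with x
   print *, h
end program exact_h
```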
>>Will the function h = NEAREST(0.0001*abs(x), x) in Fortran 2008+ perform the same "trick"?
The "trick" above can produce h == h' (the same value before and after).
NEAREST will always* produce h /= h' (* not applicable for NaN).
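The two behaviors can be sketched side by side in a small demo program (the program and its variable names are mine, not from the thread):

```fortran
program trick_vs_nearest
   implicit none
   real :: x, h, h_trick, h_near

   x = 3.0
   h = 0.0001*abs(x)

   ! The "trick": round-trip h through x. If h already combines
   ! exactly with x, then h_trick == h (before and after the same).
   h_trick = (x + h) - x

   ! NEAREST returns the machine number adjacent to its first
   ! argument, in the direction given by the sign of the second,
   ! so it always differs from its input (except for NaN).
   h_near = nearest(h, x)

   print *, 'h       =', h
   print *, 'h_trick =', h_trick
   print *, 'h_near  =', h_near
end program trick_vs_nearest
```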
Thanks, @Jim for the useful responses.
(a) Will the compiler be forced to include DONOTHING if a variable 'dummy' is declared and procedure DONOTHING is changed to
...
call donothing(temp, dummy)
...

subroutine donothing(temp, dummy)
   real :: temp, dummy
   dummy = temp/2.   ! now donothing does something
end
(b) Good point. I was trying to test this. So what you are pointing out is: if h were indeed an exactly represented number, the "trick" would leave it unchanged, whereas NEAREST would always change it, exact or not.
A perturbation in x on the order of EPSILON(x), or somewhere between that and SQRT(EPSILON(x)), is often what most scientific and engineering functions can handle.
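The EPSILON-based sizing can be sketched in a forward-difference example. The use of sin as the test function and the scaling by max(abs(x), 1.0) are my own conventions, not from the thread:

```fortran
program fd_step
   implicit none
   real :: x, h, deriv

   x = 2.0
   ! A common forward-difference step: sqrt(machine epsilon),
   ! scaled by the magnitude of x (with a guard against x == 0).
   h = sqrt(epsilon(x)) * max(abs(x), 1.0)

   deriv = (sin(x + h) - sin(x)) / h   ! approximates cos(x)
   print *, 'estimate:', deriv, '  exact:', cos(x)
end program fd_step
```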
If (or rather when) dummy isn't used after the return from the call to donothing, a good optimizing compiler will still remove the call.
What you might have luck with is creating a separate solution that builds a static library containing only your donothing. Build the Release version, then copy the donothing.lib file (or donothing.obj file) into the solution + project folder of the caller and make it a dependency of the calling project. Do not make the .obj located in the other solution + project folder the dependency (the optimizer might be crafty enough to find the sources).
You still have the case where h == h'.
FortranFan's suggestion of using EPSILON is generally a better approach, since you can scale the returned value for your purposes. An example might be "to within 4 least significant bits": delta = EPSILON(x)*16. One must look at the magnitudes of the numbers involved when deciding what is significant.
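The "within 4 least significant bits" idea can be sketched as a near-equality test. Scaling the tolerance by max(abs(a), abs(b)) to make the comparison relative is my addition, not part of the suggestion above:

```fortran
program near_equal
   implicit none
   real :: a, b, delta

   a = 1.0
   b = a + 8.0*epsilon(a)     ! perturb b by 8 ulps at magnitude 1.0

   ! Tolerance of 16 ulps, i.e. agreement to within the last
   ! 4 bits of the significand, per delta = EPSILON(x)*16.
   delta = epsilon(a)*16.

   if (abs(a - b) <= delta*max(abs(a), abs(b))) then
      print *, 'equal to within 4 least significant bits'
   else
      print *, 'differ by more than 4 least significant bits'
   end if
end program near_equal
```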
Thanks - this clarifies my concerns about the use of the "trick". Some references also use h = (x + h) - x, which would likewise be eliminated by optimization. Incidentally, the 0.0001 in my code sample represents SQRT(EPSILON(x)) for single-precision calculations. I ran some tests on the code above, and it turns out that the "trick" does produce a difference in certain cases, in the range SQRT(EPSILON(x))*EPSILON(x) to EPSILON(x), i.e. roughly 10^-24 to 10^-16 in double precision. This means the compiler is not optimizing away the "redundant" code.
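The test described above might look roughly like the following sketch (my reconstruction, not the actual test code; note that at high optimization levels the in-line round-trip may itself be folded away, so the observed differences depend on the compiler settings):

```fortran
program test_trick
   implicit none
   integer :: i
   double precision :: x, h, h2, d

   do i = 1, 5
      x  = 1.0d0 + i*0.37d0
      h  = sqrt(epsilon(x))*abs(x)   ! SQRT(EPSILON(x)) step, scaled by x
      h2 = (x + h) - x               ! the round-trip "trick"
      d  = abs(h2 - h)               ! difference introduced by rounding
      ! d typically falls between ~1e-24 and ~1e-16 in double
      ! precision, matching the range reported above.
      print *, 'x =', x, '  |h'' - h| =', d
   end do
end program test_trick
```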