This is the contents of file xxx.f90:
program xxx
implicit real*8(a-h,o-z)
integer i
homega=0.0001d0
omega_center=17.780d0
gamma=0.120d0
omega=omega_center-20.d0*gamma
i_Do: do i=0,100000
   if( omega .gt. omega_center+20.d0*gamma) exit
   if(omega .ge. omega_center+4.d0*gamma .and. omega .lt. omega_center+20.d0*gamma) then
      omega=omegamax3+dble(i-imaxlev3)*100.0d0*homega; imaxlev4=i; omegamax4=omega
   endif
   if(omega .ge. omega_center+2.d0*gamma .and. omega .lt. omega_center+4.d0*gamma) then
      omega=omegamax2+dble(i-imaxlev2)*10.0d0*homega; imaxlev3=i; omegamax3=omega
   endif
   if(omega .ge. omega_center-2.d0*gamma .and. omega .lt. omega_center+2.d0*gamma) then
      omega=omegamax1+dble(i-imaxlev1)*homega; imaxlev2=i; omegamax2=omega
   endif
   if(omega .ge. omega_center-4.d0*gamma .and. omega .lt. omega_center-2.d0*gamma) then
      omega=omegamax0+dble(i-imaxlev0)*10.0d0*homega; imaxlev1=i; omegamax1=omega
   endif
   if(omega .ge. omega_center-20.d0*gamma .and. omega .lt. omega_center-4.d0*gamma) then
      omega=omega_center-20.d0*gamma+dble(i)*100.0d0*homega; imaxlev0=i; omegamax0=omega
   endif
   write(*,*) omega,i
   if(omega .eq. omega_center-20.0d0*gamma) then; write(*,*) "omega=17.78-20Gamma"; pause; endif
   if(omega .eq. omega_center-4.0d0*gamma) then; write(*,*) "omega=17.78-4Gamma"; pause; endif
   if(omega .eq. omega_center-2.0d0*gamma) then; write(*,*) "omega=17.78-2Gamma"; pause; endif
   if(omega .eq. omega_center+2.0d0*gamma) then; write(*,*) "omega=17.78+2Gamma"; pause; endif
   if(omega .eq. omega_center+4.0d0*gamma) then; write(*,*) "omega=17.78+4Gamma"; pause; endif
   if(omega .eq. omega_center+20.0d0*gamma) then; write(*,*) "omega=17.78+20Gamma"; pause; endif
enddo i_Do
end program xxx

I expect to see "pause" 6 times when I run this compiled program, but there are only 2 of them.
In my case:
omega_center - 4.0d0*gamma is equal to 17.300d0
omega_center - 2.0d0*gamma is equal to 17.540d0
The expression "if(omega .eq. omega_center-4.0d0*gamma) then" works properly, but the expression "if(omega .eq. omega_center-2.0d0*gamma) then" works only when I manually write the equivalent number into it, i.e. "if(omega .eq. 17.540d0) then".
What's the problem in my program? Thank you for any help.
Comparing floating-point numbers for equality is your problem.
You should really use a tolerance and check the abs of the difference between the two numbers.
e.g.:
real(8) :: diff
real(8) :: tolerance   ! set to some appropriate acceptable value
diff = omega - (omega_center - 4.0d0*gamma)
if ( abs(diff) <= tolerance ) then
   ! the two numbers are considered to be equal
Les
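To make Les's suggestion concrete, here is a minimal, self-contained sketch applying the tolerance test to the original poster's constants (the program name and the tolerance value 1.0d-12 are my own assumptions, chosen to be generous for values of magnitude ~17):

```fortran
program tol_demo
  implicit none
  real(8), parameter :: omega_center = 17.780d0
  real(8), parameter :: gamma        = 0.120d0
  real(8), parameter :: tolerance    = 1.0d-12   ! assumed acceptable for these magnitudes
  real(8) :: omega, diff

  ! The value the loop is expected to hit, computed as an expression
  omega = omega_center - 2.0d0*gamma

  ! Naive equality may or may not hold, depending on rounding of each side
  write(*,*) 'naive ==  :', omega == 17.540d0

  ! The tolerant comparison is robust to that rounding
  diff = omega - 17.540d0
  write(*,*) 'tolerant  :', abs(diff) <= tolerance
end program tol_demo
```

The rounding error in 17.780d0 - 2.0d0*0.120d0 is at most a few ulps of 17.78 (roughly 1e-14), so any tolerance well above that and well below the physics of the problem works.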
Les_Neilson, thank you.
It is very strange to me that the compiler is not "smart" enough. Also, when I used previous 32-bit versions of the Intel Fortran Compiler, I don't remember it being so important not to use .eq. for floating-point numbers, and to use this construction instead:
if ( abs(diff) <= tolerance ) then
It has *always* been recommended practice to compare floating point numbers using a tolerance, whatever the compiler in use. Sadly, in my youth I did not always listen to my own advice :-)
I recommend you take a look at Steve's Dr Fortran articles on floating point arithmetic which will help you understand why it is necessary.
Les
!
INTERFACE OPERATOR(.equals.)
MODULE PROCEDURE equals_sgl, equals_dbl
END INTERFACE OPERATOR(.equals.)
!
CONTAINS
!
!------------------------------------------------------------------------------------------------
!------------------------------------------------------------------------------------------------
!
ELEMENTAL FUNCTION equals_sgl( a, b )
IMPLICIT NONE
REAL(SP), INTENT(IN) :: a, b
LOGICAL(LGL) :: equals_sgl
!
REAL(SP), PARAMETER :: ten = 10.0_sp
REAL(SP) :: eps
!
equals_sgl = .TRUE.
eps = ten*ABS(a)*EPSILON(a)
IF ( eps == 0.0_sp ) THEN
eps = ten*ABS(b)*EPSILON(b)
IF ( eps == 0.0_sp ) RETURN
END IF
!
IF ( ABS( a - b ) > eps ) equals_sgl = .FALSE.
!
END FUNCTION equals_sgl
!
!------------------------------------------------------------------------------------------------
!------------------------------------------------------------------------------------------------
!
ELEMENTAL FUNCTION equals_dbl( a, b )
IMPLICIT NONE
REAL(DP), INTENT(IN) :: a, b
LOGICAL(LGL) :: equals_dbl
!
REAL(DP), PARAMETER :: ten = 10.0_dp
REAL(DP) :: eps
!
equals_dbl = .TRUE.
eps = ten*ABS(a)*EPSILON(a)
IF ( eps == 0.0_dp ) THEN
eps = ten*ABS(b)*EPSILON(b)
IF ( eps == 0.0_dp ) RETURN
END IF
!
IF ( ABS( a - b ) > eps ) equals_dbl = .FALSE.
!
END FUNCTION equals_dbl
!
!------------------------------------------------------------------------------------------------
!------------------------------------------------------------------------------------------------
!
This is an example of a generic (overloaded) operator: it decides which specific function to use based on whether the arguments are "single" or "double" precision. I guess for production code I should have a check that both arguments are the same precision.
The small (relative) tolerance I chose to use is 10 times the machine precision.
A use would be thus:
IF ( a .equals. b) THEN ...
My equality check still suffers using "==" comparison for real zero though, and I'm not sure how I'll deal with that in future.
-Brian
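Brian's snippet is a fragment of a larger module; a self-contained sketch of how it could be assembled (the module name and the stand-in kind parameters are my guesses, and only the double-precision specific is shown) might look like:

```fortran
! A minimal sketch: the kind names and module name are assumptions,
! standing in for the SP/DP/LGL parameters of the original module.
module fp_equals_mod
  implicit none
  integer, parameter :: dp  = kind(1.0d0)   ! stand-in for DP
  integer, parameter :: lgl = kind(.true.)  ! stand-in for LGL

  interface operator(.equals.)
    module procedure equals_dbl
  end interface

contains

  ! Two reals are considered equal when they differ by no more than
  ! 10*epsilon relative to either value (Brian's tolerance choice).
  elemental function equals_dbl( a, b )
    real(dp), intent(in) :: a, b
    logical(lgl) :: equals_dbl
    real(dp), parameter :: ten = 10.0_dp
    real(dp) :: eps
    equals_dbl = .true.
    eps = ten*abs(a)*epsilon(a)
    if ( eps == 0.0_dp ) then
      eps = ten*abs(b)*epsilon(b)
      if ( eps == 0.0_dp ) return   ! both values are exactly zero
    end if
    if ( abs(a - b) > eps ) equals_dbl = .false.
  end function equals_dbl

end module fp_equals_mod

program demo
  use fp_equals_mod
  implicit none
  real(dp) :: omega
  omega = 17.780d0 - 2.0d0*0.120d0
  write(*,*) 'omega .equals. 17.540d0 :', omega .equals. 17.540d0   ! prints T
end program demo
```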
Here's how I compare two real(8)'s for equality to within nulps. The technique appears in one of Knuth's "Art of Computer Programming" volumes and is addressable space invariant.

integer(4) function fpCompare (x, y, nulps)
implicit none
real(8), intent(in) :: x, y
integer(4), intent(in) :: nulps
!Local
real(8) :: difffp, deltafp
difffp  = x - y
deltafp = nulps*2*spacing(max(abs(x),abs(y)))
if ( difffp > deltafp ) then
   fpCompare = 1     ! x > y
elseif ( difffp < -deltafp ) then
   fpCompare = -1    ! x < y
else
   fpCompare = 0     ! x == y
endif
return
end function fpCompare

Gerry
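For illustration, here is Gerry's routine packaged as an internal procedure with a small driver that applies it to the two "17.54" values from the original question (the program name and the choice of nulps = 4 are my assumptions):

```fortran
program fp_compare_demo
  implicit none
  integer :: r
  ! Compare the computed value against the literal, allowing 4 ulps of slack
  r = fpCompare( 17.780d0 - 2.0d0*0.120d0, 17.540d0, 4 )
  write(*,*) 'fpCompare result:', r   ! 0 means equal to within nulps
contains
  integer function fpCompare (x, y, nulps)
    real(8), intent(in) :: x, y
    integer, intent(in) :: nulps
    real(8) :: difffp, deltafp
    difffp  = x - y
    ! SPACING gives the local ulp size at the larger magnitude
    deltafp = nulps*2*spacing(max(abs(x),abs(y)))
    if ( difffp > deltafp ) then
      fpCompare = 1       ! x > y
    elseif ( difffp < -deltafp ) then
      fpCompare = -1      ! x < y
    else
      fpCompare = 0       ! x == y to within nulps
    endif
  end function fpCompare
end program fp_compare_demo
```

The rounding error in the computed value is at most a couple of ulps of 17.78, so with nulps = 4 the comparison reports equality.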
Cool. I don't have ready access to Knuth (I know, heresy!).
Could you explain how the relative difference is used (effectively normalization to 1, where EPSILON is the smallest number, accounting for the kind type of real, which when added to 1 gives something different from 1), if it (normalization) is used at all?
However (and you do not do this), I would not use yours as an elemental procedure, since SPACING will look at the largest value in each argument (MAX is also elemental).
F2003 (with corrections) reads for SPACING:
Result Value. If X does not have the value zero, the result has the value b**max(e-p,eMIN-1), where b, e, and p are as defined in 13.4 for the value nearest to X in the model for real values whose kind type parameter is that of X; if there are two such values, the value of greater absolute value is taken. If X has the value zero, the result is the same as that of TINY (X). If X is an IEEE infinity, the result is positive infinity. If X is an IEEE NaN, the result is that NaN.
(Italics mine, and the italicized part is why I would not use Knuth's procedure as you've implemented it for an elemental procedure).
Later: NOPE, I'm WRONG. In the case of two array arguments, as you have it, it would work (reliably, I think) as an elemental procedure. What I italicized applies if, for a single element of the array argument, there are two such values nearest to that one element; then the one of greater absolute value is taken.
Still, Might you explain how relative difference (normalization) is used, if at all?
-Brian
Brian-Lamm:
Could you explain how the relative difference is used (effectively normalization to 1, where EPSILON is the smallest number, accounting for the kind type of real, which when added to 1 gives something different from 1), if it (normalization) is used at all?
There are many 'machine' epsilons over and above the usual eps + 1 > 1. There is the eps at x, a.k.a. eps(x), and in general eps(x) /= eps(y). The SPACING intrinsic accounts for the nonuniform granularity of floating-point numbers over their range, they being more closely bunched near the origin. Floating-point numbers within (-tiny, tiny) are of such reduced precision as to be dangerous. I'd rather rely on
http://www.amazon.com/Numerical-Computing-Floating-Point-Arithmetic/dp/0898715717
than f2003 for ieee fp stuff. BTW I see no reason why Knuth's recipe isn't applicable to elemental or pure functions but as usual YMMV.
Gerry
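The nonuniform granularity Gerry describes can be seen directly with the SPACING intrinsic; a small illustrative program (printed values are approximate, noted as comments):

```fortran
program spacing_demo
  implicit none
  ! SPACING(x) is the gap between x and the next representable real of the
  ! same kind, so it grows with |x|: the floating-point grid is nonuniform.
  write(*,*) spacing(1.0d0)     ! equals epsilon(1.0d0), ~2.2e-16
  write(*,*) spacing(17.78d0)   ! ~3.6e-15
  write(*,*) spacing(1.0d6)     ! ~1.2e-10
  write(*,*) spacing(0.0d0)     ! equals tiny(0.0d0), per the standard
end program spacing_demo
```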
"BTW I see no reason why Knuth's recipe isn't applicable to elemental or pure functions but as usual YMMV."
One can. I corrected myself (see "Later: ... " in my reply beginning with "cool").
Thanks, I'll dig around a little more. I did/do realize fp numbers are more or less logarithmically distributed, but I was under the (apparently incorrect) impression that normalization to 1 would account for the non-uniform granularity.
I'll research it more as I believe, I think as you do, Knuth knows what he's writing about.
-Brian
When the 2nd edition of volume 2 (IIRC) appeared, I read in the preface that he was prepared to offer $0.32 to the first person who found an error in the tome. Somewhere within the book I noticed that he added Avogadro's number to Planck's constant, upon which I emailed him to claim the prize. He politely wrote back by post that the 3rd volume stated he would take the liberty of jumbling apples with oranges, so the prize was not awarded (at least not to me). How can you argue with that? At least I got his autograph, far better than what was then, I guess, the cost of a postage stamp.
I checked the price of Overton's book on the SIAM site: ca $50.
Gerry
