Hello
I use Visual Studio 2008 (Version 9.0.21022.8) and Intel Fortran Compiler 11.0.3454.2008.
I wrote a simple program that should calculate this equation:
x1+x2-x3-x4*3
with
x1 =32.04243
x2 =18.01534
x3 =44.00995
x4 = 2.01594
With these parameters the sum should be 0.0.
This is my program:
program test
  !molar mass of species [kg/kmol]
  real*8 :: x1, x2, x3, x4
  real*8 :: Massdotsum
  x1 = 32.04243D0
  x2 = 18.01534D0
  x3 = 44.00995D0
  x4 =  2.01594D0
  Massdotsum = 0.0D0
  print *, '0', Massdotsum
  Massdotsum = x1 + x2 - x3 - x4*3.0D0
  print *, x1, x2, x3, x4, x4*3.0D0
  print *, '1', Massdotsum, x1 + x2 - x3 - x4*3.0D0
  pause
end program
but the result of the program is always:
1.7763568E-15
Does anybody have an idea what mistake I am making,
or is this a bug?
Thanks for your help!
Bastian
13 Replies
If your intention is to calculate
(x1+(x2-x4*3)) -x3
you must write it with parentheses.
If the compiler required the option /assume:protect_parens in order to get the correct result with parentheses added, you might consider that a bug. Without the parentheses, the compiler can't be expected to analyze your code to figure out the most accurate order of operations.
Yes, when you have alternating signs and proceed in order of increasing magnitude, you can minimize roundoff error without invoking extra precision. If you want extra precision, you should be able to get it from the 32-bit compiler with /arch:IA32 -Qpc80.
Makes one wonder: when is a 0 a 0? When there are 14 0's in front of it?
1.7763568E-15??
(of course this is when comparing it to the numbers you were working with to arrive at that number)
I'm reminded of the 1977 Forsythe, Malcolm, and Moler book 'Computer Methods for Mathematical Computations', which had the simple machine epsilon program:
"the accuracy of floating point arithmetic can be characterized by machine epsilon, the smallest floating point number eps such that: 1 + eps > 1"
the program went as follows:
      eps = 1.
   10 eps = 0.5*eps
      epsp1 = eps + 1.
      if (epsp1 .gt. 1.) go to 10
(gotta love them goto's!)
Then use that epsilon to decide when 0 is 0, or close enough for government work (OK, that's old too... back in the day when the government funded research).
brian
When you do calculations with non-integer numbers, it is very unlikely that your result is exactly what you expect.
Our math teacher back in school showed us the trick 1 / 3 * 3 = 0.9999999999.
Somewhere in this forum there was an old but very good article series which dealt with how floating point operations are done by the CPU, but I can't find it any more.
So, you have to accept the fact that your CPU result differs from the exact result. It becomes worse when you do something like
if a <= b
Just say a is your result 1.7763568E-15 and b is 0; the if would then return false. For that you need some epsilon to do the trick (but without any gotos...)
Markus
I seem to remember that there is even a theoretical difficulty in determining whether a particular expression is exactly zero (quite apart from floating-point approximations).
The above algorithm to determine epsilon fails in two ways:
- It determines epsilon/2
- On machines that use extended precision but store the data as single or double precision, you get a value that is way too small
Regards,
Arjen
This a.m. I was up early to send out a file, and on startup my computer gave me the ol' "chkdsk needed" issue...
So, while waiting for the 'main' computer to come to life, on another computer and just for the fun of it, I did a single precision (REAL) and double precision (REAL(8)) version of the EPS program from Forsythe, and guess what came out of it...
single precision: 24 steps: eps=0.000000059604644775390625000000000000000000000000
double precision: 53 steps: eps=0.000000000000000111022302462515654042363166809082
That double precision number of zeros is mighty close to the E-15 number you found.
Be careful. The epsilon you list, eps=0.000000059604644775390625000000000000000000000000 (the single precision one), is only valid when comparing with a number whose magnitude is approximately 1.0. If the numbers you deal with are on the order of 1.0E+6, then eps=0.059604644775390625000000000000000000000000. The same issue, with more precision, applies when dealing with double precision. Any eps has to be crafted for the numbers you are dealing with.
Jim Dempsey
You can use the nearest() function: it returns the smallest machine number larger than a given one,
or the largest machine number smaller than it:
next_greater = nearest( x,1.0)
next_lower = nearest( x, -1.0 )
Regards,
Arjen
RRSPACING may also be handy.
Steve, I'm looking for an article series I mentioned before.
It's back from the Compaq days, IIRC. There are several parts dealing with how floating point operations work, what NaN means, and so on.
Do you know the link? I don't know if it is on some Intel server or an external link.
How about this link:
What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
You could also take a look at the Visual Fortran Newsletter (archive) under Useful Links and see Steve's articles on "The perils of real numbers" parts 1 and 2
(Originally written by Dave Eklund of Compaq I believe, Steve will correct me if I'm wrong)
Les
Les is correct - Dave's articles can be found here. Dave came along with me to Intel and is still with us here.
The Perils of real numbers - That was the article I was looking for!
Thanks a lot!
