Intel® Fortran Compiler

Very small error progressively increasing

S__MPay
Beginner

I am doing an iterative calculation, but due to a progressively increasing error the results start to diverge.

I was not able to find any mistake in the code, especially since I am using the penalty method, which has a very small intrinsic error.

Since the error is small, the iteration initially converges, but I noticed that the error doubles with each iteration.

This is the outcome of the calculations, where Error = strain increment 2 - strain increment 1:

strain increment 1 strain increment 2 Error
0.00000200000000010109 0.00000200000000010121 1.20E-19
0.00000360000000018082 0.00000360000000018105 2.30E-19
0.00000488000000024377 0.00000488000000024424 4.70E-19
0.00000590400000029337 0.00000590400000029432 9.50E-19
0.00000672320000033230 0.00000672320000033416 1.86E-18
0.00000737856000036248 0.00000737856000036621 3.73E-18
0.00000790284800038518 0.00000790284800039267 7.49E-18
0.00000832227840040082 0.00000832227840041586 1.50E-17
0.00000865782272040866 0.00000865782272043871 3.01E-17
0.00000892625817640571 0.00000892625817646584 6.01E-17
0.00000914100654118522 0.00000914100654130545 1.20E-16
0.00000931280523297264 0.00000931280523321313 2.40E-16
0.00000945024418633035 0.00000945024418681130 4.81E-16
0.00000956019534887217 0.00000956019534983409 9.62E-16
0.00000964815627861697 0.00000964815628054086 1.92E-15
0.00000971852502183561 0.00000971852502568338 3.85E-15
0.00000977482001525616 0.00000977482002295169 7.70E-15
0.00000981985600768391 0.00000981985602307498 1.54E-14
0.00000985588479700876 0.00000985588482779092 3.08E-14
0.00000988470781923398 0.00000988470788079830 6.16E-14

At iteration 37 the error reaches 1.61e-8, which is very big, and finally the results start to diverge.

I wonder if I can get rid of this error by trimming the decimals after the 12th significant digit?

Unfortunately I don't know how to trim a decimal to 12 significant digits, e.g. 9.99336917279437e-06 => 9.99336917279400e-06.

I cannot simply use dnint(variable * 1.d12) / 1.d12, since for a number with exponent -6 that would keep only about six significant digits.

 

9 Replies
Roman1
New Contributor I (accepted solution)

 

Try the following:

! Shift a into [1, 10), truncate the scaled value, then shift back.
! int() would overflow a default (32-bit) integer here, so dint() is
! used to truncate in double precision.  Note that 1.d12 keeps
! 1 + 12 = 13 significant digits; use 1.d11 for exactly 12.
a = 9.99336917279437d-06
b = -floor(log10(a))
c = dint(a*(10.d0**b)*1.d12) / ((10.d0**b)*1.d12)
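
A minimal complete program built around this snippet (the declarations are my assumption):

program trim_digits
  implicit none
  real(8) :: a, c
  integer :: b
  a = 9.99336917279437d-06
  b = -floor(log10(a))                              ! b = 6 for this value
  c = dint(a*(10.d0**b)*1.d12) / ((10.d0**b)*1.d12)
  print '(es21.14)', c                              ! ~9.99336917279400E-06
end program trim_digits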

 

jimdempseyatthecove
Honored Contributor III

When performing an integration loop with fractional time quanta:

! poor coding
quanta = 0.001
t = 0.0
do i = 0, nIterations
  ...
  t = t + quanta   ! rounding error accumulates with every pass
end do

====================
! better coding
quanta = 0.001d0 ! expressly specify rhs is double when lhs is double
do i = 0, nIterations
  t = quanta * i   ! a single rounding error, independent of i
  ...
end do

You may have similar issues elsewhere in your code.
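
A quick way to see the difference (a sketch; the loop count is arbitrary):

program drift_demo
  implicit none
  real(8) :: quanta, t_acc, t_mul
  integer :: i
  integer, parameter :: n = 1000000
  quanta = 0.001d0
  t_acc = 0.0d0
  do i = 1, n
    t_acc = t_acc + quanta         ! error compounds each iteration
  end do
  t_mul = quanta * n               ! one rounding step, no compounding
  print *, 'accumulated:', t_acc   ! not exactly 1000.0
  print *, 'multiplied :', t_mul   ! 1000.0 to within one ulp
end program drift_demo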

Jim Dempsey

S__MPay
Beginner

jimdempseyatthecove wrote:

When performing an integration loop with fractional time quanta:

...

====================
! better coding
quanta = 0.001d0 ! expressly specify rhs is double when lhs is double
do i = 0, nIterations
  t = quanta * i   ! a single rounding error, independent of i
  ...
end do

...

Jim Dempsey

Thank you for your kind replies. Rounding the numbers to remove the extra decimals does seem to prevent the error from increasing.

I always try to remember to use d0 to make constants double precision. I found a couple of places where, for the inverse of a variable, I had used A_inv = 1 / A instead of A_inv = 1.d0 / A. Fixing them, however, didn't prevent the error.

I am also using the old-style real*8 to declare double-precision variables, to match the older code. I know it is not standard-conforming, but I could also use

      use, intrinsic :: iso_fortran_env, only: REAL64
      integer, parameter  :: dp = REAL64

and replace real*8 with real(dp); but since we are using the Intel compiler it will not make a difference, right?
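
I suppose I could verify the equivalence with a quick check like this (untested sketch):

program kinds
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  real*8       :: a    ! non-standard but widely supported
  real(8)      :: b    ! Intel/gfortran kind number 8
  real(real64) :: c    ! standard, portable spelling
  print *, kind(a), kind(b), kind(c)   ! all three print 8 with ifort
end program kinds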

 

andrew_4619
Honored Contributor II

The different real-declaration styles won't make a difference; they are all the same thing in the end. I am perplexed that truncating numbers makes the result more "accurate"; logic would say it should be less accurate. We don't know anything about the nature of your code. Is it that the 'next guess' values for the iteration need "normalising" every few iterations to ensure consistency with the physical thing they represent?

S__MPay
Beginner

The next guess is computed with the Newton-Raphson method. The differential of the function, however, is a matrix (the stiffness matrix). The matrix is originally singular and is made non-singular by multiplying specific members by a very large number (the penalty number, e.g. 10^30) so as to satisfy the known boundary conditions. Making the matrix non-singular this way is not 100 percent accurate, but the error gets smaller as the chosen number gets bigger, so in practice we can make the error as small as we want. It is very difficult to find the source of the increasing error, but it might be those small errors that somehow find a way to add up in each iteration and put me in trouble!
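
For illustration, this is the idea on a toy 2x2 system (my own made-up example, solved with Cramer's rule only because it is 2x2):

program penalty_demo
  implicit none
  ! [2 1; 1 3] u = [1; 2], with u(1) constrained to 0.5 via the penalty
  real(8) :: K(2,2), F(2), u(2), det, penalty
  penalty = 1.0d30
  K = reshape([2.0d0, 1.0d0, 1.0d0, 3.0d0], [2,2])
  F = [1.0d0, 2.0d0]
  K(1,1) = K(1,1) * penalty   ! specific member times the penalty number
  F(1)   = K(1,1) * 0.5d0     ! forces the solve to give u(1) ~ 0.5
  det  = K(1,1)*K(2,2) - K(1,2)*K(2,1)
  u(1) = (F(1)*K(2,2) - K(1,2)*F(2)) / det
  u(2) = (K(1,1)*F(2) - F(1)*K(2,1)) / det
  print *, u   ! u(1) is 0.5 up to a tiny penalty-induced error
end program penalty_demo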

jimdempseyatthecove
Honored Contributor III

The error, determined with specific members multiplied by a very large number (the penalty number, e.g. 10^30), may very well flush out the significant bits of the smaller numbers. The error determined from the combined numbers can then appear smaller, yet the actual result (sans the penalty) will have an unknown loss of significance. Taken to the extreme, you could set the penalty to +Infinity and get an error estimate of 0.

For arrays of elements with a large variation in magnitude, where you are summing values, it is most accurate to produce the sum in order from smallest magnitude to largest magnitude. If it is impractical to do this (sort the matrix), you can sum sections of the array. A sketch:

theSum = 0.0d0
from = huge(from)
to = tiny(to)
do i = 1, size(array)
  if (array(i) /= 0.0d0 .and. abs(array(i)) < from) from = abs(array(i)) ! ignore zeros
  if (abs(array(i)) > to) to = abs(array(i))
end do
do while (from <= to)
  step = from + from   ! upper bound of the current magnitude bin
  do i = 1, size(array)
    if ((abs(array(i)) >= from) .and. (abs(array(i)) < step)) theSum = theSum + array(i)
  end do
  from = step          ! advance to the next bin
end do

For lesser precision you could use step = from * 4.

Jim Dempsey

jimdempseyatthecove
Honored Contributor III

The above algorithm can have unnecessary passes through the while loop. The following may be better:

theSum = 0.0d0
from = huge(from)
to = tiny(to)
do i = 1, size(array)
  if (array(i) /= 0.0d0 .and. abs(array(i)) < from) from = abs(array(i)) ! ignore zeros
  if (abs(array(i)) > to) to = abs(array(i))
end do

do while (from <= to)
  step = from * 2            ! upper bound of the current bin
  nextMin = huge(nextMin)    ! smallest magnitude above this bin, if any remain
  do i = 1, size(array)
    if ((abs(array(i)) >= from) .and. (abs(array(i)) < step)) then
      theSum = theSum + array(i)
    else if ((abs(array(i)) >= step) .and. (abs(array(i)) < nextMin)) then
      nextMin = abs(array(i))
    end if
  end do
  from = nextMin             ! jump straight to the next populated bin
end do
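
A runnable test program wrapping the sketch above (the array contents are arbitrary test values):

program binned_sum_demo
  implicit none
  real(8) :: array(6), theSum, from, to, step, nextMin
  integer :: i

  array = [1.0d-18, 3.0d0, -2.0d-9, 1.0d15, 0.5d0, 7.0d-19]

  theSum = 0.0d0
  from = huge(from)
  to = tiny(to)
  do i = 1, size(array)
    if (array(i) /= 0.0d0 .and. abs(array(i)) < from) from = abs(array(i))
    if (abs(array(i)) > to) to = abs(array(i))
  end do

  do while (from <= to)
    step = from * 2
    nextMin = huge(nextMin)
    do i = 1, size(array)
      if ((abs(array(i)) >= from) .and. (abs(array(i)) < step)) then
        theSum = theSum + array(i)
      else if ((abs(array(i)) >= step) .and. (abs(array(i)) < nextMin)) then
        nextMin = abs(array(i))
      end if
    end do
    from = nextMin
  end do

  print *, 'binned sum:', theSum
  print *, 'naive sum :', sum(array)
end program binned_sum_demo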

Jim Dempsey

S__MPay
Beginner

jimdempseyatthecove wrote:

The error, determined with specific members multiplied by a very large number (the penalty number, e.g. 10^30), may very well flush out the significant bits of the smaller numbers. The error determined from the combined numbers can then appear smaller, yet the actual result (sans the penalty) will have an unknown loss of significance. Taken to the extreme, you could set the penalty to +Infinity and get an error estimate of 0.

For arrays of elements with a large variation in magnitude, where you are summing values, it is most accurate to produce the sum in order from smallest magnitude to largest magnitude. If it is impractical to do this (sort the matrix), you can sum sections of the array.

...

Jim Dempsey

Dear Jim, thanks for your important note.

jimdempseyatthecove
Honored Contributor III

See the corrected code in reply number 8.

Jim Dempsey
