Do you think anyone has ever written a Fortran program to estimate the change in temperature as water is boiled in a kettle?
I ask because I used to be able to make a cup of tea in the time it took to compile a program on a COMPAQ Portable using MS Fortran 3.
Now I can make a cup of tea in the time it takes to run my program that generates a "good" mesh for FEM models of buildings and bridges.
PS The latest version of the VS 2019 preview does not pick up that some changed files need to be recompiled -- it is interesting.
Relax, and be thankful! No matter how much the run times of your programs change over the years, you can always check the answers in constant time by reading the tea leaves.
But my world was shattered when I found that date and time do not run at a constant rate; instead there is a kernel you need to consult to check for leap seconds. I can no longer add 200000 seconds to a time and know the answer.
Time is not linear as measured; of course, it is somewhat linear as long as you do not move. Although if I sit still, time moves very slowly.
PS My grandmother would read the tea leaves.
>>Time is not linear as measured.
"Time flies like an arrow; fruit flies like a banana"
Speaking of non-linear time, when you write a simulation program with a delta time increment (fractional seconds), instead of
T = T + dT ! advance time one interval
you should use:
T = BaseTime + dT * integrationStep
The reason is to avoid the potential accumulation of round-off errors (one for each advancement by dT). The product form introduces only a single instance of potential round-off error.
>>T = T + dT ! advance time interval
It all depends on how many bytes you use for T and dT. Four bytes gives surprisingly poor performance, failing beyond about 10^6 time steps, while 8 bytes does not present a problem for the typical values I have used.
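The difference between the two forms is easy to demonstrate. A minimal sketch (the step size and step count here are illustrative, not taken from anyone's post): add dT = 0.001 a million times in REAL(4) and REAL(8), then compare against the product form.

```fortran
program time_roundoff
  implicit none
  integer, parameter :: sp = selected_real_kind(6)   ! typically REAL(4)
  integer, parameter :: dp = selected_real_kind(15)  ! typically REAL(8)
  integer, parameter :: nsteps = 1000000
  real(sp) :: t4, dt4
  real(dp) :: t8, dt8
  integer  :: i

  dt4 = 0.001_sp;  t4 = 0.0_sp
  dt8 = 0.001_dp;  t8 = 0.0_dp
  do i = 1, nsteps
     t4 = t4 + dt4        ! accumulation: one rounding error per step
     t8 = t8 + dt8
  end do
  print '(a,f14.6)', 'REAL(4) accumulated: ', t4
  print '(a,f14.6)', 'REAL(8) accumulated: ', t8
  print '(a,f14.6)', 'product form:        ', real(nsteps, dp)*dt8
end program time_roundoff
```

The REAL(4) sum drifts visibly away from 1000 after a million additions, while the REAL(8) sum and the product form agree to the printed digits.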
Was it coffee and cake or just a cookie for the compile?
After a few bites of a small cookie you might return disappointed.
One of the bugs I found in the CE-QUAL-W2 model was exactly this. The model has a variable time step, based on a stability test relating to volume replacement in grid cells. The variable keeping track of time was single precision, representing simulation day; the problem did not become apparent until the days got up to 1000 or so, and it showed up as mass-balance error.

Another problem I had to fix was in determining when to change inflow values in the time series of inflows. The logic went something like this: jday = jday + delta, and then if jday > time of the next input change, swap to the new input value. The problem here is that delta was changing with the inflows, such that you'd get small deltas when inflows were large and large deltas when inflows were low. The cumulative effect was that low flows overshot the change times by more than large flows did, so low flows were applied longer than they should have been, again leading to mass-balance errors. The cure was to put in a check after computing a new delta that didn't allow it to be larger than the distance to the next change in the time-series inputs.
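That cure amounts to one MIN after the stability test. A hedged sketch (the names jday, delta, and next_input_time are illustrative, not CE-QUAL-W2's actual variables):

```fortran
program clamp_step
  implicit none
  real(8) :: jday, delta, next_input_time

  jday = 100.40d0                  ! current simulation day
  next_input_time = 100.50d0      ! next change in the inflow time series
  delta = 0.35d0                   ! step proposed by the stability test

  ! Cap the step so it can never jump past the next input change.
  delta = min(delta, next_input_time - jday)
  jday  = jday + delta

  print '(a,f8.3)', 'jday after step: ', jday   ! lands on 100.500
end program clamp_step
```

With the clamp, the step lands exactly on the change time instead of overshooting it, so no inflow value is applied past its interval.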
>>In my dynamic simulations dT is not constant but may vary between dTmin and dTmax so I have to write: Tn = Tn + dT
My simulations also use dynamic changing of dT. There are many instances where modifying dT is useful (or required), such as:
When approaching collision of particles
When approaching transition from slack to taut or taut to slack
When approaching yield point
In these situations consider using:
Tn = Tn + dT * TicksSinceChangeInDeltaT
And when you change dT, set TicksSinceChangeInDeltaT=1 (or 0 depending on where you place the increment).
Also, be mindful that, depending on the simulation, a change in dT may need to be gradual.
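One way to read that scheme (my interpretation, with illustrative names): fold the finished dT regime into a base time whenever dT changes, so the single-product form still applies within each regime and round-off never accumulates step by step.

```fortran
program rebase_time
  implicit none
  real(8) :: tBase, dT, tNow
  integer :: ticks, step

  tBase = 0.0d0;  dT = 0.010d0;  ticks = 0
  do step = 1, 500
     if (step == 250) then          ! e.g. approaching a collision:
        tBase = tBase + dT*ticks    ! fold the finished regime into the base,
        dT    = 0.001d0             ! shrink the step,
        ticks = 0                   ! and restart the tick counter
     end if
     ticks = ticks + 1
     tNow  = tBase + dT*ticks       ! single product: one rounding per regime
  end do
  print '(a,f10.4)', 'tNow = ', tNow
end program rebase_time
```

Here 249 steps at dT = 0.01 give a base of 2.49, and the remaining 251 ticks at dT = 0.001 add 0.251, so the final printed time is 2.7410.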
cryptogram>> The cure was to put in a check after computing a new delta that didn't allow it to be larger than the distance to the next change in time series inputs
cryptogram's experience is typical of how accumulation of round-off errors can affect a simulation, in his case in a large way. The description of his cure indicates that his change in dT requires more care than a simple change (as I described in #8). In his (or her) case, the change had to fulfill specific anniversary requirements. The change of dT is not necessarily a trivial coding matter and should be taken with care.
>>The change of dT is not necessarily a trivial coding matter, and should be taken with care.
I am finding that out with the ODE program. Unfortunately I do not have all the data Fryba had, so I cannot match his results, and the start time can be very problematic. As a coding problem, changing dt is harder than the original problem: it takes a short ODE program and turns it into pages of hard-to-follow code.
I was, however, asking about boiling water -- it is interesting how these random problems turn into engaging streams, like the moon shot problem.
Node 1 0 0.100000001490E+00 0.100000001490E+00 0.100000001490E+00
Node 2 0 0.200000002980E+00 0.100000001490E+00 0.100000001490E+00
Node 3 0 0.200000002980E+00 0.200000002980E+00 0.100000001490E+00
Node 4 0 0.100000001490E+00 0.200000002980E+00 0.100000001490E+00
Node 5 0 0.100000001490E+00 0.100000001490E+00 0.200000002980E+00
Node 6 0 0.200000002980E+00 0.100000001490E+00 0.200000002980E+00
Node 7 0 0.200000002980E+00 0.200000002980E+00 0.200000002980E+00
Node 8 0 0.100000001490E+00 0.200000002980E+00 0.200000002980E+00
Node 9 0 0.200000002980E+00 0.100000001490E+00 0.100000001490E+00
The results of using REAL(4) in place of a real real -- REAL(8) -- have a visible impact.
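The 0.100000001490 in the node listing is exactly what REAL(4) stores for 0.1; a two-line check makes the point:

```fortran
program real4_drift
  implicit none
  real(4) :: x4
  real(8) :: x8

  x4 = 0.1      ! single-precision literal: nearest REAL(4) to 0.1
  x8 = 0.1d0    ! double-precision literal

  print '(a,f16.12)', 'REAL(4) 0.1 = ', real(x4, 8)   ! 0.100000001490
  print '(a,f16.12)', 'REAL(8) 0.1 = ', x8            ! 0.100000000000
end program real4_drift
```

Widening the stored REAL(4) value to REAL(8) exposes the representation error, which is precisely the trailing 001490 visible in the node coordinates above.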