Do we have a function to round the digits of real constants? Say, something similar to the ROUND function in MS Excel.
I know there is a workaround, NINT(x * 10.0) * 0.1, to round to a single decimal place, but I am looking for a direct function, if available.
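Standard Fortran has no Excel-style ROUND(x, digits) intrinsic, but the ANINT and NINT intrinsics can be combined with scaling; a minimal sketch (the factor 10.0 assumes rounding to one decimal place):

```fortran
program round_demo
    implicit none
    real :: x
    x = 3.14159
    ! ANINT rounds to the nearest whole number and returns a real,
    ! so scaling by 10 before and dividing by 10 after rounds to
    ! one decimal place.
    print *, anint(x * 10.0) / 10.0
    ! NINT returns an integer; convert back to real and rescale.
    print *, real(nint(x * 10.0)) * 0.1
end program round_demo
```

Note that the result is the nearest representable binary value to 3.1, not exactly 3.1, since single-precision reals are stored in base 2.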
In Fortran, which is a compiled language, numbers are stored and processed in the IEEE binary representation, which uses base-2. Therefore, rounding in the base-ten representation is unnatural and time-consuming. Fortunately, such rounding is encountered mostly when doing formatted output, and sometimes when doing formatted input. Why do you need rounding, and how accurate do you want it to be?
You can see some of the problems associated with rounding in a recent discussion: http://forums.silverfrost.com/viewtopic.php?t=3160 .
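To illustrate the point about base-2 storage, a small sketch (single-precision default real assumed) that exposes the nearest representable value to 0.1:

```fortran
program base2_demo
    implicit none
    real :: tenth
    tenth = 0.1
    ! Printing with many digits reveals the nearest base-2
    ! single-precision value, roughly 0.100000001490, which is
    ! why exact decimal rounding of binary floats is impossible.
    write (*, '(f20.12)') tenth
end program base2_demo
```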
Thank you mecej4.
I deal with real*4 vectors of about 8E6 rows, where my first step is to do some math and the next is to remove duplicates. Since I am dealing with the differences between consecutive entries (and with much more processing of the data and differences afterwards), if I don't round off first and remove duplicates, my code will process everything and finally discard many entries whose difference is less than 0.1. And I have to repeat such steps for about 50000 sets of vectors. That summarizes the benefit I am targeting by rounding off to one decimal.
What is your definition of "duplicate"? Could you use code such as the following?
if (abs(x(i)/x(i-1) - 1) < 1e-4) then
    ! process duplicate
else
    ! process distinct
end if
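That relative-tolerance test might be applied to a whole vector as in the following self-contained sketch (the array name, contents, and tolerance are illustrative, not from the original posts):

```fortran
program dup_scan
    implicit none
    integer, parameter :: n = 6
    real :: x(n) = [1.0, 1.00005, 1.2, 1.2, 3.5, 3.50001]
    integer :: i
    do i = 2, n
        ! Relative test: flags x(i) as a duplicate of x(i-1) when
        ! they agree to roughly 4 significant digits.
        if (abs(x(i)/x(i-1) - 1.0) < 1e-4) then
            print *, i, 'duplicate of previous'
        else
            print *, i, 'distinct'
        end if
    end do
end program dup_scan
```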
mecej4 wrote:
What is your definition of "duplicate"? Could you use code such as the following?
if (abs(x(i)/x(i-1) - 1) < 1e-4) then
    ! process duplicate
else
    ! process distinct
end if
Thank you. If it is faster and less memory-consuming than NINT(x * 10.0) * 0.1, I would definitely go your way.
>> if (abs(x(i)/x(i-1) - 1) < 1e-4) then
That would catch "within 4 significant digits" and not 4 decimal places. (You can change the 1e... to vary the number of digits.)
mohanmuthu,
Consider data points: A, B, C with values nn.01, nn.10, nn.19 respectively.
A ~= B, and B ~= C, however A !~= C (within +/- 0.1)
Depending on how you construct your tests, you might end up with A,B,C being considered duplicates.
Or, B and C might be considered duplicates.
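The non-transitivity of the A, B, C scenario above can be checked directly; a small sketch with nn taken as 10 for concreteness:

```fortran
program chain_demo
    implicit none
    real :: a, b, c, tol
    a = 10.01; b = 10.10; c = 10.19
    tol = 0.1
    print *, abs(a - b) <= tol   ! A ~= B:  true
    print *, abs(b - c) <= tol   ! B ~= C:  true
    print *, abs(a - c) <= tol   ! A ~= C:  false
end program chain_demo
```

So "approximately equal" is not an equivalence relation, and which points get merged depends on the order of the tests.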
Alternatively, consider two points A and B with values nn.09 and nn.11.
Do you want these considered approximately equal? (They have a 0.02 difference.)
Your original solution is not suitable for testing approximate equality in the above situations.
Consider you are testing for (and eliminating) duplicates.
At some point, an arbitrary point A is found to be approximately equal to B. You may want all subsequent tests to be performed against the center point of A and B. But then this may necessitate retesting the prior values compared against A and B (because the "location" of A and B has changed).
Furthermore, in situations where you have multiple approximately equal proximities, you have the question of which pair you choose as your initial center point. The simplest choice, the first one encountered, might not be correct.
Your choice on how to handle this is not as clear as initially expected.
Jim Dempsey
Hi Jim,
Thank you for the very thoughtful reply. In fact, as I started implementing, I encountered the same issue of comparing C to A or B when A and B were very close. My thought is to replace B with A if B is approximately equal; i.e., A continues to be the center point until a distinct point is found. This would keep me out of the situation where small differences accumulate into a big one when consecutive points lie close together with small increments or decrements between them.
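That anchor-point strategy might be sketched as follows (the array contents, tolerance, and names are illustrative, not from the original posts):

```fortran
program anchor_dedup
    implicit none
    integer, parameter :: n = 5
    real :: x(n) = [1.00, 1.04, 1.08, 1.30, 1.33]
    real :: anchor, tol
    integer :: i
    tol = 0.1
    anchor = x(1)
    print *, 'keep', anchor
    do i = 2, n
        ! Compare against the anchor, not the previous element, so
        ! small consecutive steps cannot accumulate into a large drift.
        if (abs(x(i) - anchor) >= tol) then
            anchor = x(i)
            print *, 'keep', anchor
        end if
    end do
end program anchor_dedup
```

Here 1.04 and 1.08 are treated as duplicates of 1.00 even though 1.08 is only 0.02 away from the next kept point's neighborhood, which is exactly the trade-off Jim describes.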
You have to consider what you actually need to do. If you are trying to simulate some physical process, note that the physical world tends to look at all possibilities at the same time, then chooses the best, or potentially chooses not to choose. If this is simply a programming shortcut, then the first match within proximity might be sufficient.
It appears as if you are concerned about the number of points consolidated into one:
a) first two points in proximity in search of points not already paired
b) closest two points in proximity in search of points not already paired
c) first n>2 points in proximity in search of points not already paired
d) closest n>2 points in proximity in search of points not already paired
Note that in the case of a programming shortcut, a choice among the above is relatively clear cut.
In the case of simulating a physical process, it is more compute intensive.
Pseudo code:
SubSet = Array
TwoPoints = Closest(SubSet)
do while (Proximity(TwoPoints) < cutoff)
    SubSet = Squish(SubSet, TwoPoints)
    TwoPoints = Closest(SubSet)
end do
Jim Dempsey
The Array and SubSet are arrays of entities (particles). The particle would include an artificial property indicating it is a condensate of one or more particles (and/or prior condensates). The Closest function can optionally exclude from consideration condensate particles of more than n native particles.
It is up to you to add conditional code to handle a Closest call on a SubSet containing only one particle (or condensate particle).
Jim Dempsey