I have a program which is heavily dependent on input information via .dat type text files. The nature and quantity of the information in these files largely determines how long the program will take to run; the run time can range from nearly nothing to effectively unbounded. Because this can bog down the system and interrupt workflow, I would like to know whether Visual Studio/Intel Fortran provides a way to estimate the running time prior to execution.
Please let me know if anyone has any ideas. Thank you very much....
I don't see how such a thing would be possible. You say that the work done depends on the input - how could any tool intelligently estimate this for an arbitrary application? What would you do with this information if you had it?
Many programs that involve one or more iterative algorithms give the user some control over how much computer time can be wasted. The number of iterations allowed, the termination criteria, or even the total run time and memory consumption can be used to abort a runaway program.
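Purely as an illustration (the limits and the do_one_iteration stand-in are made up for this sketch), a driver loop with both an iteration cap and a wall-clock cutoff might look like this:

program bounded_run
   implicit none
   integer, parameter :: max_iter    = 100000   ! assumed iteration cap
   real,    parameter :: max_seconds = 600.0    ! assumed wall-clock limit
   integer :: iter, count0, count1, count_rate
   real    :: elapsed, residual

   call system_clock(count0, count_rate)
   residual = 1.0
   do iter = 1, max_iter
      call do_one_iteration(residual)           ! stand-in for the real work
      if (residual < 1.0e-6) exit               ! normal termination criterion
      call system_clock(count1)
      elapsed = real(count1 - count0) / real(count_rate)
      if (elapsed > max_seconds) then
         write(*,*) 'aborting after', iter, 'iterations and', elapsed, 'seconds'
         stop
      end if
   end do
   write(*,*) 'finished after', min(iter, max_iter), 'iterations'
contains
   subroutine do_one_iteration(r)
      real, intent(inout) :: r
      r = 0.5 * r                               ! dummy work that happens to converge
   end subroutine do_one_iteration
end program bounded_run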
On the other hand, as we know, programs and compilers (which are also programs) may contain bugs, and we have to coexist with those bugs. Bugs may make the program run longer, give incorrect results, or hang the computer. Unless you can factor the effect of bugs into the desired, but probably unobtainable, estimates of run time, those estimates would be of little use.
Let me simplify the problem. Suppose I had something simple in VS: a solution with one project. The project has no headers and no resource files, just one .FOR source file containing a simple 'Hello World':
program hello
   print *, "Hello World!"
end program hello
There you go. No input or output files. Just that. The question is: Is there a way in Visual Studio/Intel Fortran to determine how long this program will take to run before it actually runs?
The utility of this information is straightforward. For more complex programs it can enable the user to predict how long a computation will take before actually running it. This can prevent running programs which will take an unreasonable time to run and waste time/resources.
I should probably have posted this as my first question... lol
Consider this program:
program chkrandom
   implicit none
   real :: r
   do
      call random_number( r )
      if ( r == 0.5 ) then
         write(*,*) '0.5 will be generated!'
         stop
      endif
   enddo
end program chkrandom
Of course, it is a silly, useless program, but how would you predict the run-time? You might examine the algorithm behind random_number to see if it would even be able to produce such a number (there is in general no guarantee that a random number generator ever produces a particular number). But that means going into details that are not evident from the code.
You are asking the wrong question. Instead, if this is indeed of practical use, you should analyse the algorithm of the program to estimate the number of steps it takes to run to completion. There are many techniques for this, but they are mathematical in character and not specific to Fortran. To relate those estimates to the implementation, measure! Run your program with inputs whose sizes you can relate to the estimates and measure how long the program takes.
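As a trivial sketch of what "measure" means here (the sizes and the fill_and_sum work routine are invented for the example), you could time the same operation at several sizes with SYSTEM_CLOCK:

program measure_scaling
   implicit none
   integer, parameter :: sizes(3) = [100000, 1000000, 10000000]
   integer :: i, n, count0, count1, count_rate
   real    :: t
   real(kind=8) :: total

   do i = 1, size(sizes)
      n = sizes(i)
      call system_clock(count0, count_rate)
      call fill_and_sum(n, total)
      call system_clock(count1)
      t = real(count1 - count0) / real(count_rate)
      write(*,'(a,i9,a,f8.3,a)') 'n = ', n, '   time = ', t, ' s'
   end do
contains
   subroutine fill_and_sum(n, total)
      integer,      intent(in)  :: n
      real(kind=8), intent(out) :: total
      real(kind=8), allocatable :: a(:)
      allocate(a(n))
      call random_number(a)        ! some work whose cost grows with n
      total = sum(a)
   end subroutine fill_and_sum
end program measure_scaling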
Mind you, such measurements are influenced by the phase of the moon and whether your computer is having a bad mood or not.
I am not entirely kidding: getting reliable performance comparisons is a hideously difficult enterprise, and that is true even for small, well-defined tasks. Now you are asking this for a very general, unspecified program.
Amine A. wrote:
it can enable the user to predict how long a computation will take before actually running it. This can prevent running programs which will take an unreasonable time to run and waste time/resources.
In this instance you need to add a progress indicator to the software, and allow the user to quit if they choose to. For example, in a large loop, output something on a periodic basis such as "128 steps of 122000 completed in 2.6 minutes....". This is good practice, as otherwise you often cannot tell whether an application is hung and will never complete. The compiler cannot help you here; you, the programmer, must do it.
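A minimal sketch of such a progress report (the step count, reporting interval and dummy work are only illustrative):

program progress_demo
   implicit none
   integer, parameter :: nsteps = 122000
   integer, parameter :: report_every = 1000
   integer :: istep, count0, count1, count_rate
   real    :: minutes

   call system_clock(count0, count_rate)
   do istep = 1, nsteps
      call do_step()                              ! placeholder for the real work
      if (mod(istep, report_every) == 0) then
         call system_clock(count1)
         minutes = real(count1 - count0) / real(count_rate) / 60.0
         write(*,'(i0,a,i0,a,f0.1,a)') istep, ' steps of ', nsteps, &
               ' completed in ', minutes, ' minutes...'
      end if
   end do
contains
   subroutine do_step()
      real :: x
      call random_number(x)                       ! dummy work
   end subroutine do_step
end program progress_demo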
>>print *, "Hello World!"
Well then, let's look at that for the moment.
Back when I started programming, that print statement (had it been supported in FORTRAN II) would have gone out to a 110 bps Teletype (10 characters per second). So this would have taken...
~5 minutes to load the program via paper tape
+ a few milliseconds to reach the print statement
+ 1.4** seconds to output the text
+ a few milliseconds to complete
** The system that this program would have run on would likely have supported flow control on terminal output. So, during the 1.4 seconds of print time, the operator could potentially have keyed in Ctrl-S to stop the printing, then gone out to lunch. How would you suggest the compiler estimate how long you would be out for lunch?
Some pieces of code, as others have mentioned, have execution times that are data dependent. This includes not only the quantity of data but, in some cases, the values of the data. This is especially true when the code (or part of it) contains convergence routines. Poorly written convergence routines can take a very long time to converge for certain input values, and in some cases may never converge. Better written convergence routines (subject to opinion) will return a best value nearest the desired value, as opposed to one within an (absolute) delta of the correct value.
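A toy example of data-dependent iteration counts (the function, starting value and tolerance are invented): a Newton loop whose run time depends entirely on the input value, with a guard against non-convergence.

program newton_guard
   implicit none
   integer, parameter :: max_iter = 50      ! guard against non-convergence
   real,    parameter :: tol = 1.0e-6
   real    :: x, x0, fx, dfx
   integer :: iter

   x0 = 10.0                                ! the input value drives the iteration count
   x  = x0
   do iter = 1, max_iter
      fx  = x*x - 2.0                       ! solve x**2 = 2
      dfx = 2.0*x
      if (abs(fx) < tol) exit
      x = x - fx/dfx
   end do
   if (iter > max_iter) then
      write(*,*) 'did not converge from x0 =', x0
   else
      write(*,*) 'converged to', x, 'in', iter, 'iterations'
   end if
end program newton_guard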
As mentioned by others, your best estimate technique is to run tests with representative data sets (small model, medium model, large model) and then state: For these test data the runtimes were x, y, z.
Jim Dempsey
The compiler does know, for optimization purposes, how many "cycles" a sequence of instructions takes, assuming no control flow, for the processor microarchitecture targeted. That doesn't help you, though, as it can't guess how many times a block of instructions will be executed based on external input. It's also an imprecise measure, as it can't take into account memory activity, processor stalls, etc.
Probably the best path here is to run the application under Intel VTune Amplifier XE, timing critical sections with input data that exercises different "lengths". Once you collect this data, you can do some rough calculations of how each increase in complexity affects run-time, then feed that back into your program to do whatever you intended to use this data for. There is no magic tool that will estimate this for you based on static source.
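For example (purely a sketch, assuming the cost behaves roughly like c*n**p), two measured runs are enough to fit that model and extrapolate to a larger case; the sizes and times below are made up:

program predict_runtime
   implicit none
   ! two measured (size, seconds) pairs -- values are invented for the example
   real :: n1 = 1.0e5, t1 = 0.8
   real :: n2 = 1.0e6, t2 = 9.5
   real :: n3 = 1.0e7            ! size we want a prediction for
   real :: p, c, t3

   ! assume t = c * n**p and solve for p and c from the two measurements
   p  = log(t2/t1) / log(n2/n1)
   c  = t1 / n1**p
   t3 = c * n3**p
   write(*,'(a,f5.2)')    'apparent exponent p = ', p
   write(*,'(a,f10.1,a)') 'predicted time for n3: ', t3, ' s'
end program predict_runtime

With the made-up numbers above the fitted exponent comes out close to 1, i.e. roughly linear scaling in the input size.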
This is akin to an interesting problem that used to occur in the olden days of mainframes. My wife's father had an interesting research problem for his master's thesis in 1965. He also had the problem that he was limited to 20 minutes of mainframe time.
He could not solve his last problem because it took too long to run. I recoded the problem in Fortran, ran all his samples, timed them, and compared the 1965 results with the modern ones. By my analysis it would have taken about 30 minutes on the mainframe, so he never solved that last problem, but I did.
You can do some test runs and develop a statistical analysis to estimate the run times. It is not hard, just not worth the effort unless you are really time crunched.
I did that recently on the Raspberry Pi, where I had to use two threads: one to collect data, which took 8 seconds, and one to do an FFT that had to finish in less than 8 seconds, otherwise it bottlenecked the program, and it could not be allowed to bottleneck. It was also in Mono, so that is a bit slower. So I have a timing program now.
John
