Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

Performance of debug vs. release versions

dboggs
New Contributor I
656 Views
Just for kicks, I decided to compare the debug and release versions of a program I am developing.

The program reads time series data from files on disk, performs basic statistics calculations (mean, std. dev., max, min) and power spectral density, plots the time series points and power spectrum points on screen, and saves the results to new files.

There are 48 input files, containing about 3000 sets of data, and each set contains 4096 time series data points.

The debug version has an .exe size of 2.2 MB and executes in 2m30s.

The release version has an .exe size of 1.3 MB and executes in 2m23s.

The majority of the time appears to be spent in plotting the data, but I don't have a breakdown.

Is this what I should expect? I am disappointed that the release version doesn't run much faster.
4 Replies
IanH
Honored Contributor III
What debug-like settings do you have active? What are your settings in release?

I see significantly slower (as expected) execution times with something like /check:all turned on.

From what you write though, it also sounds like there's a fair bit of input/output activity. There's not much the compiler can do to make your hard disk spin faster. As you note, plotting is also reliant on the performance of things like the operating system's graphics subsystem, hardware, etc.
tropfen
New Contributor I
Hello,

Have a look at the CPU usage. The performance of your hard disc may be the limit. Is your program fully using one CPU core, or is the usage much lower than that? Opening and reading data from the hard disc slows down the performance.

Try changing the structure of your data sets. If you currently use ASCII data, switching to direct-access files may improve your performance.
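To illustrate, direct access lets you read a whole record of binary data in one operation instead of parsing text. A minimal sketch (unit number, record length, and array name are hypothetical):

```fortran
Integer :: IntArray(4096)
! RECL is in 4-byte units by default with ifort; use /assume:byterecl for bytes
Open (Unit=10, File='series.dat', Access='direct', Form='unformatted', &
      Recl=4096, Status='old')
Read (10, Rec=1) IntArray   ! one record = one whole data set, no text parsing
Close (10)
```

Each `Rec=` number then addresses one fixed-size data set directly, so you can also jump to any set without reading the ones before it.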

Frank
dboggs
New Contributor I
The settings for my debug build are:

/nologo /debug:full /Od /warn:interfaces /module:"Debug\" /object:"Debug\" /Fd"Debug\vc100.pdb" /traceback /check:bounds /libs:qwin /dbglibs /c /fpscomp:ioformat

The settings for my release build are:

/nologo /module:"Release\" /object:"Release\" /Fd"Release\vc100.pdb" /libs:qwin /c /fpscomp:ioformat

The data file format is true binary, not text and not "Fortran unformatted".

I will try to experiment with the read routine to speed it up. Assuming that reading is a bottleneck, should I expect a significant difference between:

Method 1.
Integer :: IntArray(BigLimit)
Do i = 1, npts
   Read (LU, '(I10)') IntArray(i)
End do

vs. Method 2.
Integer :: IntArray(BigLimit)
Read (LU, '(I10)') (IntArray(i), i = 1, npts)

vs. Method 3.
Integer :: IntArray(BigLimit)
Read (LU, '(I10)') IntArray(1:npts)
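Since the files are true binary rather than text, a fourth option worth trying is an unformatted stream read, which skips per-item format conversion entirely. A sketch, reusing the names above (the file name is hypothetical):

```fortran
Integer :: IntArray(BigLimit)
Open (LU, File='series.bin', Access='stream', Form='unformatted', Status='old')
Read (LU) IntArray(1:npts)   ! one unformatted read, no '(I10)' text parsing
Close (LU)
```

This assumes the on-disk integers match the default integer kind and byte order; if they don't, the kind would need to be declared to match the file.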

Incidentally, the performance I'm seeing is satisfactory; I'm just trying to learn about the value of making a release version vs. the simpler route of just sticking with debug versions.

TimP
Honored Contributor III
Your project settings won't have much effect on the performance of read. Your methods 2 and 3 should be equivalent. Method 1 needn't have the same effect, let alone the same performance. You may want /assume:buffered_io.
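Instead of the compiler flag, the same buffering can be requested per file with Intel Fortran's BUFFERED= specifier on OPEN (an Intel extension, not standard Fortran; file name hypothetical):

```fortran
Open (LU, File='series.bin', Access='stream', Form='unformatted', &
      Buffered='yes', Status='old')
```

This limits the change to the one heavily-read file rather than altering I/O behavior program-wide.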