Intel® Fortran Compiler

compilation times

burgel
Beginner
I'm running ifort (Linux) on an SGI Itanium box (1.3 GHz), and I have a long subroutine that takes a long time to compile with just -O1. The code is a single subroutine of about 14,000 lines, mostly short loops that operate on 1-D "gather" arrays containing values from the 3-D fields wherever there is actually some work to be done. At -O1, the compiler actually issues a warning:

Warning: Optimization suppressed due to excessive resource requirements; contact Intel Premier Support

I get around this with the -override_limits option, but the compiler (version 8) takes 75 minutes to compile the code and by the end is using about 8GB of memory. Version 9 just became available, and it takes over 4 hours to compile, using the same options! By contrast, XLF compiles the same code in about 5 minutes and gives comparable performance on a 2.5GHz G5 machine.
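
To give a sense of the shape of the code, it is basically hundreds of blocks like the sketch below. The names and the trivial arithmetic are made up for illustration; the real loops do the actual model physics.

  ! Illustrative only -- not the actual model code.
  subroutine gather_example(nx, ny, nz, active, q3d, q1d, n)
    implicit none
    integer, intent(in)  :: nx, ny, nz
    logical, intent(in)  :: active(nx,ny,nz)
    real,    intent(in)  :: q3d(nx,ny,nz)
    real,    intent(out) :: q1d(nx*ny*nz)
    integer, intent(out) :: n
    integer :: i, j, k, m

    ! Gather the points of a 3-D field where there is work to do into a 1-D array.
    n = 0
    do k = 1, nz
      do j = 1, ny
        do i = 1, nx
          if (active(i,j,k)) then
            n = n + 1
            q1d(n) = q3d(i,j,k)
          end if
        end do
      end do
    end do

    ! Many short loops like this one then operate on the gathered values.
    do m = 1, n
      q1d(m) = q1d(m) * 0.5
    end do
  end subroutine gather_example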

The Question: My suspicion is that -O1 turns on some global optimizations that are choking on the sheer length of the code. Is there an equivalent set of options, combined with -O0, that will do the same thing as -O1, so I can test removing different options? The user's guide for v9 says that -O1 sets the following options:

-unroll0 -nofltconsistency -fp

I don't see how these would cause such a long compile time, however. (For reference, -O0 takes about 30 seconds to compile.)
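
What I had in mind is something along these lines, starting from the fast -O0 build and adding the documented -O1 sub-options one at a time (the file name is just a placeholder, and I'm assuming those sub-options can be combined with -O0 at all):

  ifort -c -O0 bigsub.f                                  # ~30 seconds
  ifort -c -O0 -unroll0 bigsub.f
  ifort -c -O0 -unroll0 -nofltconsistency bigsub.f
  ifort -c -O0 -unroll0 -nofltconsistency -fp bigsub.f
  ifort -c -O1 -override_limits bigsub.f                 # the 75-minute baseline

If none of the intermediate steps reproduce the slowdown, then -O1 must be enabling analysis beyond those documented sub-options.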

Any advice would be appreciated. This long compile time is really ridiculous, and I don't think the resulting code is giving accurate results, either.

-- Ted


Steven_L_Intel1
Employee
Please submit an example of this problem to Intel Premier Support. I don't think there is a set of switches that will help you - apparently there is some internal analysis taking too much memory, causing excessive swapping and the long compile times.

Rather than trying switches, please let us look at it so that we can see what is going wrong. We MAY be able to suggest a compiler internal switch as a workaround.
TimP
Honored Contributor III
Due to customer demand, -O1 on IPF has been developed into a code-size-minimizing option, and its dead-code elimination phases have become quite time-consuming. You should see improved run-time performance. As Steve said, there may be internal switches that give up some of that optimization.
"it would be nice if" the compiler didn't give up entirely on optimization when it detects excessive use of resources, and if it could restore some optimizations when it is done with the troublesome subroutine. Splitting the source into individual subroutines is a common way of helping the latter situation.
burgel
Beginner
This piece of code is pretty much one giant subroutine (part of a thunderstorm simulation model), so splitting off bits would be a fair amount of work to accomplish. I'd love for the experts to look at it to see what is going on.

- Ted