I am trying to optimize some legacy code that has some very long functions; some functions have more than 1,000 lines.
I wonder whether the length of a single function will affect the compiler's optimization results?
Is there any test/experiment data?
Thanks
5 Replies
The most evident problem is when the compiler hits one of its preset limits and issues a warning that optimization has been disabled from that point on. If additional functions follow in the same source file, optimization remains disabled for those functions as well. For this reason, and because interprocedural optimizations are likely to be ineffective in such cases, it may be necessary to avoid putting multiple large functions in a single source file, to disable automatic inlining (e.g. with -fno-inline-functions), and/or to set the -override-limits option. With -override-limits, the compiler doesn't guarantee against hangs and other bad effects (compilation may literally take hours, even without IPO).
Setting -O1 (no vectorization, compromise between code size and local performance) may be a necessary choice. It's better to set -O1 than have the compiler hit limits and effectively set -O0.
On the other hand, it may be necessary to increase certain limits (see the icc -help entries on -finline-limit etc.) or to use the __forceinline keyword so as to get inlining of functions with large parameter lists when they are called inside inner loops.
So, yes, optimization problems have been observed with big functions, and there isn't a solution which works in all cases.
Quoting - tim18
So, yes, optimization problems have been observed with big functions, and there isn't a solution which works in all cases.
Hi,
I just want to add a comment to tim18's explanation.
The problem of optimizing very large programs is not something only ICC is faced with. Any compiler has practical limits on the code sizes it can handle; I remember an automatically generated program that made GCC run for 4 hours and then abort. The algorithms used within the compiler's optimization passes are sometimes of high complexity (with "high" being O(n^2) or even worse), and that creates a natural limit on the program sizes a compiler can optimize without taking forever to run.
As tim18 wrote, lowering the optimization level can help here, provided compilation then succeeds within an acceptable timeframe and the optimization results are still good enough. But that is something that really depends on the application code and the user's needs.
Cheers,
-michael
Thank you for the above two posts. I tried -O1 (I am using -O2 now), but the result is much worse: the run time is about 4 times longer than with the -O2 flag.
Any other suggestions?
There is a lot of data I would need to pass around if I break up the long function, and I am not sure that will help.
Quoting - Zhu Wang (Intel)
I am trying to optimize some legacy code which has some very long functions. Some functions have more than 1000 lines.
I wonder whether the length of a single function will affect the compiler optimization results?
Any test/experiment data?
1,000 lines in one function is not overly large. What percentage of these lines are comments?
100,000 lines would be considered overly large.
I suggest as a compromise you run VTune or other profiler on the code using representative data.
Look at what code is seldom used, and see what is appropriate for lifting out to separate function(s).
Passing arguments or a context pointer, plus the call overhead, for code that requires < 1% of the computational time will be inconsequential.
Jim