Could somebody who knows the VAST-F/toOpenMP tool compare it with the Intel Fortran compiler's auto-parallelization feature from a computing-performance point of view?
More info about VAST-F/toOpenMP is here:
http://www.crescentbaysoftware.com/vast_toOpenMP.html
and here:
http://www.crescentbaysoftware.com/docs/.
Some Fortran gurus recommend this tool as a superior source-level OpenMP parallelizer. Is that justified?
Michal
7 Replies
The importance of tools to help in conversion to OpenMP (rather than depending entirely on hidden auto-parallelism) is recognized in the Intel Composer products (which should include Fortran late this year). It's partly a question of whether you are willing to invest effort into effective parallelization and which type of tool may help you work most efficiently.
VAST has been on the market a very long time and hasn't made much inroad in the perceived need for more parallelization tools. As it wouldn't do the entire job on many applications, you would still require profiling and threading correctness tools.
That is a rather diplomatic answer. If I understand correctly, are you saying that the VAST/toOpenMP conversion tool cannot do this job effectively, or cannot do it at all?
My question is whether the auto-parallelization options of the Intel Fortran compiler can do roughly the same job.
I think it's impossible to predict whether you will find any of these tools particularly helpful. You might consider attempting to arrange a trial period. I don't have a reputation for diplomatic answers!
There are limitations to what can be parallelized automatically; you can often thread at a higher level using OpenMP and your knowledge of the program structure.
For a discussion of automatic parallelization by the Intel compiler, and of how to help the compiler, see http://software.intel.com/en-us/articles/automatic-parallelization-with-intel-compilers/.
And this is exactly my point:
VAST-F/toOpenMP "automatically" parallelizes at the source level by adding suitable OpenMP directives directly to the source code, while the Intel compiler auto-parallelizes via a somewhat different, hidden approach. The two methods are sometimes complementary, but sometimes in strong contradiction. From a typical user's point of view the VAST approach is much better, because its output can then be modified manually at the source level using the user's knowledge of the program's structure and purpose.
Yes, if you are willing to take VAST-generated code with OpenMP directives as a starting point, and work on it yourself, that certainly makes sense. For that, you need some understanding of OpenMP, private variables, etc. Auto-parallelization is easier to use and requires less knowledge, but can't achieve as much. (Though there are directives you can use to help the compiler a bit).
More effort, more reward :-)
Using your knowledge of how data flows through your application often has a larger impact on obtaining performance through parallelization. An automated tool (VAST), compiler auto-parallelizers, and even reports from profilers (VTune, Parallel Inspector) will not incorporate your knowledge of the data flow into the parallelization effort.
Auto-parallelization (by the compiler) and profilers (and, I assume, VAST) tend to optimize from (or set your focus on) the innermost nesting level, i.e. the "hot spots". While this works some of the time, it is not an effective strategy all of the time. When using a profiler, you should look up the call tree to see whether it would be more effective to move the parallelization up one or several levels. When you use knowledge of your application, your effort begins at the outer level and works inward. Many applications benefit from approaching parallelization from both ends (nested parallelism, in OpenMP-speak).
For some, VAST could be a good educational tool, as effective as attending a multi-day class or having on-site training, and it may be useful for a first-pass parallelization effort. Your best performance will come from understanding your application and attaining a level of proficiency in parallel programming.
Jim Dempsey
