We have a simulation whose results we have validated using the following baseline:
Red Hat Fedora Core 3
gcc 3.4.2
Intel Fortran 9.0.024
kernel 2.6.12.1
We use optimization level 2 (-O2).
When we upgrade the Fortran compiler to 10.1.015 or 10.1.018 and compile with optimization, our results are nowhere near the expected values. When we turn off optimization, the results are as expected. So we built a system matching a configuration Intel documents as tested:
Red Hat Fedora Core 7
gcc 4.1.2
kernel 2.6.21
Intel Fortran 10.1.015
With optimization, our results are far outside the expected range; with optimization turned off, they are as expected. We need to be able to optimize our code so that we can run near real time. With no optimization, we cannot even get close to real time.
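One plausible factor (an assumption on my part, not something confirmed in this thread) is that at -O2 the newer compiler may reassociate or vectorize floating-point sums, and floating-point addition is not associative, so a reordered accumulation can round differently. A minimal Python sketch of the non-associativity:

```python
# Floating-point addition is not associative, so a compiler that
# reorders sums under optimization can legitimately change results.
# Hypothetical illustration only; the simulation's divergence may
# have other causes.
left = (0.1 + 0.2) + 0.3   # accumulate left to right
right = 0.1 + (0.2 + 0.3)  # a reassociated order
print(left == right)       # False: the two orders round differently
```

If that is what is happening here, Intel Fortran 10.x's -fp-model precise option (my suggestion, not advice from this thread) is intended to restrict such value-changing optimizations while keeping most of -O2.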
Please advise.
Randy Suggs