I've downloaded the Intel Fortran v8.1 compiler for Linux, build 20041118Z, as well as the Intel Visual Fortran compiler for Windows, build 20040802.
I am coding a numerical integrator that implements an n-dimensional Newton's method solver. The code has been compiled under Visual Studio .NET 2003 and with a Linux makefile, with the default REAL and DOUBLE PRECISION kinds set to 128 bits. There is, however, a remarkable difference in accuracy between the two programs. Any constants that I've computed are identically equal (up to the default printout precision of more than 30 digits), yet the Linux solver appears to have an upper accuracy limit of 27 digits, whilst the Windows one reaches 31 digits.
I know that for the default real kind there's a difference in precision between the two platforms (http://support.intel.com/support/performancetools/fortran/sb/CS-007783.htm).
Should this affect REAL(16)? If so, are there any compiler flags that would facilitate consistent behaviour?
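One way to confirm what REAL(16) actually provides on each platform is to query the numeric-inquiry intrinsics. A minimal sketch (assuming the compiler maps REAL(16) to IEEE binary128):

```fortran
program kindcheck
  implicit none
  real(kind=16) :: x
  ! For IEEE binary128, PRECISION is 33 decimal digits and
  ! DIGITS is 113 mantissa bits; a mismatch between the two
  ! platforms here would explain the accuracy gap directly.
  print *, 'precision =', precision(x)
  print *, 'digits    =', digits(x)
  print *, 'epsilon   =', epsilon(x)
end program kindcheck
```

If both builds report the same values, the kind itself is consistent and the difference lies elsewhere (library routines or code generation).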
Cheers,
--
Steven Capper
If the difference persists with current compilers, please let us know through Premier Support and supply a sample.
The Windows compiler you have is several months older than the Linux compiler, and this could account for some differences. You should also check which options are set implicitly in the Visual Studio environment; you can see this under Command Line in the Fortran property tab.
The age difference is something I hadn't considered for the Windows compiler; I will look into that. I've checked the Visual Studio compile flags and there's nothing out of the ordinary that I can see. At the moment I'm working with a debug build, so there are no fancy compiler options other than the flags setting the default REAL and DOUBLE PRECISION kinds to 16 bytes.
I think I'm going to have to locate where the apparent "precision loss" starts under Linux. Unfortunately that won't be easy, as there's a lot of code to go through.
Are there any sanity-check benchmarks you can recommend for Fortran? There may be a problem with my Linux libraries (I'm running Gentoo).
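One way to localize the divergence without reading all the code is to dump intermediate values at full quad precision on both platforms and diff the two logs; the first line that differs points at the offending routine. A hypothetical helper (the name is illustrative):

```fortran
! Print a labelled quad-precision value with enough digits that two
! platforms can be compared run-for-run by diffing the output files.
subroutine dump_value(label, v)
  implicit none
  character(*), intent(in) :: label
  real(kind=16), intent(in) :: v
  ! ES43.34E4 leaves room for all ~33 significant decimal digits
  ! of an IEEE binary128 value, plus sign and a 4-digit exponent.
  write (*, '(a, " = ", es43.34e4)') label, v
end subroutine dump_value
```

Sprinkling calls like `call dump_value('jacobian(1,1)', j11)` at the major stages of the integrator narrows the search quickly.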
Cheers,
--
Steven Capper
I would not trust any "sanity tests" you happen to find - most of them are flawed in one way or another.
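That said, a quick cross-platform spot check (not a substitute for a proper validation suite) is to evaluate a constant with known digits in quad precision on both machines and compare the printed output character for character:

```fortran
program quadpi
  implicit none
  ! pi computed entirely in the quad-precision math library;
  ! reference value: 3.14159265358979323846264338327950288...
  real(kind=16), parameter :: pi = 4.0_16 * atan(1.0_16)
  write (*, '(es43.34e4)') pi
end program quadpi
```

If the two platforms disagree before the 30th digit here, the problem is in the runtime math library rather than in the integrator code.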
Do the Intel Linux libraries have any dependence (for maths operations) on the system libraries?
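On Linux you can see at a glance which shared libraries a build actually pulls in; `ldd` lists the shared objects a binary links against (the binary name below is illustrative):

```shell
# List the shared-library dependencies of the executable; look for the
# system libm versus Intel's libimf in the output to see which math
# library the math calls resolve to.
ldd ./integrator
```

If the glibc `libm` appears alongside `libimf`, some math calls may be resolving to the Gentoo system library rather than Intel's.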
Cheers,
--
Steven Capper
With both compilers now on near-identical builds:
Windows: 20040802Z
Linux: 20040803Z
the code runs perfectly.
