Consider the following simple code:
```fortran
use, intrinsic :: iso_fortran_env, only : IK => int8
integer(kind=IK) :: foo
foo = int( 256, kind=kind(foo) )
end
```
The Intel Fortran compiler gives no warnings with the above code:
```
xx>ifort /c /standard-semantics /warn:all /stand p.f90
Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 18.0.0.124 Build 20170811
Copyright (C) 1985-2017 Intel Corporation.  All rights reserved.

xx>
```
Now consider the following where an initialization expression is used:
```fortran
use, intrinsic :: iso_fortran_env, only : IK => int8
integer(kind=IK) :: foo = int( 256, kind=kind(foo) )
end
```
The compiler then issues a warning, #6047:
```
xx>ifort /c /standard-semantics /warn:all /stand p.f90
Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 18.0.0.124 Build 20170811
Copyright (C) 1985-2017 Intel Corporation.  All rights reserved.

p.f90(3): warning #6047: The BYTE / LOGICAL(KIND=1) / INTEGER(KIND=1) value is out-of-range.
integer(kind=IK) :: foo = int( 256, kind=kind(foo) )
-----------------------------^
```
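For context (an editorial aside, not part of the original post): when 256 is forced into an 8-bit integer, typical two's-complement hardware wraps it around to 0, although the Fortran standard leaves the result of an out-of-range `INT()` processor dependent. A small Python sketch of that wrap:

```python
# Simulate two's-complement wrap-around into an 8-bit integer, which is
# what int(256, kind=int8) typically produces on common hardware (the
# Fortran standard leaves the out-of-range result processor dependent).
def to_int8(value: int) -> int:
    value &= 0xFF                      # keep only the low 8 bits
    return value - 256 if value >= 0x80 else value

print(to_int8(256))   # -> 0: the out-of-range literal silently becomes 0
print(to_int8(255))   # -> -1
```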
Can someone from the Intel Fortran team please explain why the warning appears in the second case but not in the first?
Thanks,
I can tell you: the Fortran-specific "front end", which processes compile-time arithmetic, can catch overflows in many cases. But the multi-language code generator has no support for run-time integer overflow detection. It has been on the "wish list" for many years but has never been implemented by the code generator developers.
It's not clear to me whether this specific case could be detected by the front end. Maybe. But the general case of run-time integer overflow detection is not available.
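The distinction can be sketched as follows (an illustrative toy, not the compiler's actual code): a front end can range-check a value only while it is still a compile-time constant, as in an initialization expression; once the value reaches the variable through an ordinary assignment, catching the overflow would require run-time checks in the generated code.

```python
# Toy stand-in for a front-end range check on a constant initializer.
INT8_MIN, INT8_MAX = -128, 127

def front_end_check(constant: int):
    """Return a diagnostic if a compile-time constant is out of int8 range."""
    if not INT8_MIN <= constant <= INT8_MAX:
        return f"warning: constant {constant} out of range for INTEGER(KIND=1)"
    return None

# Initialization expression: the constant is visible, so it can be flagged.
print(front_end_check(256))

# Ordinary assignment: the right-hand side is, in general, only known when
# the program executes, so a front-end constant check has nothing to
# inspect; detecting the overflow would need code-generator support.
print(front_end_check(100))   # an in-range constant draws no diagnostic
```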
Sometimes, the situation is as if the compiler and the runtime are running on different processors. Years ago, I ran into the following problem. A program used 1D70 as a guard value, to represent infinity or "not yet set". Variables were set to this value using a DATA statement, and the same value was used with the same meaning in formatted input data.
```fortran
      PROGRAM REALBUG
      DOUBLE PRECISION SIGMA,BND
      DATA SIGMA /1.0D70/
      READ (*,*) BND
      write(*,10) BND, SIGMA, BND.GT.SIGMA
10    format(1x,1p,D24.17,' > ',D24.17,' ? ',L5)
      END
```
When the program was run, and the input was "1.0D70", the output was
```
1.00000000000000000D+70 > 1.00000000000000000D+70 ? T
```
The decimal to IEEE conversion in the compiler and the similar conversion routine of the RTL I/O routines produced a difference in the least significant bit of the mantissa, and the comparison gave an unexpected answer.
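The effect can be reproduced in spirit (a sketch, not the original compiler or RTL code): construct two doubles that differ by one unit in the last place, as two slightly different decimal-to-binary conversion routines might produce for the same string "1.0D70", and the comparison flips even though the values are nearly indistinguishable.

```python
import math

# Two candidate binary values for the decimal string "1.0e70": the
# correctly rounded one, and one ULP higher, standing in for a second,
# slightly different decimal-to-binary conversion routine.
compile_time = 1.0e70          # value baked in by the "compiler"
run_time = math.nextafter(compile_time, math.inf)   # value read by the "RTL"

print(run_time > compile_time)        # True: the guard-value test misfires
print(run_time - compile_time)        # absolute gap, roughly 1.5e54 here
```

The relative error is about 1.5e-16, well below what most people would ever notice in printed output, yet it is enough to defeat an exact-equality or ordering test on a guard value.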
This problem occurred with a compiler from a different vendor than Intel.
I've seen the "different results from compile-time and run-time" evaluation often. If you identify such issues, do report them.