icc 13.0.1 20121010, 64-bit Linux
Test cases are from the Linux kernel.
1. Consider this simple program. icc miscompiles it.
------------------------------------------------------------------------
int count(int i)
{
    if (i++ >= 0x7FFFFFFF)
        __builtin_trap();
    return i;
}

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int x = atoi(argv[1]);
    printf("%d\n", count(x));
}
------------------------------------------------------------------------
This is the expected result, compiled with -O0.
$ icc -O0 t.c
$ ./a.out 0
1
This might be an icc bug with -O2.
$ icc -O2 t.c
$ ./a.out 0
Segmentation fault (core dumped)
icc incorrectly folds (i++ >= 0x7FFFFFFF) to true for every i, even though for any i < 0x7FFFFFFF the comparison is false and involves no undefined behavior. Do you happen to know why?
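For reference, the crash is consistent with count() being reduced to an unconditional trap at -O2. The following is only a sketch inferred from the observed behavior, not icc's actual output.
------------------------------------------------------------------------
/* Hypothetical equivalent of count() as miscompiled at -O2: the
 * comparison has been folded to true, so the trap fires for every i,
 * including i = 0, where (i++ >= 0x7FFFFFFF) is false. */
int count(int i)
{
    __builtin_trap();
}
------------------------------------------------------------------------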
2. Consider a slightly different version.
------------------------------------------------------------------------
int count(int i, int max)
{
    if (i++ >= max)
        __builtin_trap();
    return i;
}

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int x = atoi(argv[1]);
    int max = atoi(argv[2]);
    printf("%d %d %d\n", x, max, count(x, max));
}
------------------------------------------------------------------------
The behavior changes from -O0 to -O2.
$ icc -O0 t.c
$ ./a.out 2147483647 2147483647
Segmentation fault (core dumped)
$ icc -O2 t.c
$ ./a.out 2147483647 2147483647
2147483647 2147483647 -2147483648
icc turns (i++ >= max) into (++i > max). Strictly speaking, this is not a bug in icc as far as the C standard is concerned, because
1) signed integer overflow is undefined behavior, and
2) icc is therefore allowed to assume that i++ never overflows.
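The -O2 result can be reproduced by writing the rewrite out by hand. The following sketch assumes the transformation is exactly (i++ >= max) -> (++i > max); count_transformed is just an illustrative name, not anything icc emits.
------------------------------------------------------------------------
/* Hand-written version of the assumed rewrite.  Without overflow,
 * (i++ >= max) and (++i > max) are equivalent: old i >= max iff
 * i + 1 > max.  With i = max = 2147483647, the original test is
 * INT_MAX >= INT_MAX, so -O0 traps; after the rewrite, ++i wraps to
 * INT_MIN on this target, INT_MIN > INT_MAX is false, and the function
 * returns -2147483648, matching the -O2 output above. */
int count_transformed(int i, int max)
{
    if (++i > max)
        __builtin_trap();
    return i;
}
------------------------------------------------------------------------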
My question is: does icc provide anything like gcc's -fno-strict-overflow or -fwrapv to disable such optimizations based on signed integer overflow?