Here is a small piece of code:

PROGRAM test
  INTEGER, PARAMETER :: wp = KIND(1.0d0)
  REAL(wp), ALLOCATABLE, DIMENSION(:) :: rr
  COMPLEX(wp), ALLOCATABLE, DIMENSION(:) :: cc
  INTEGER :: n
  n = 1000000                     !<= this value is OK
  n = 1100000                     !<= this value causes the error
  ALLOCATE(rr(n), cc(n))
  cc = (0.0_wp, 0.0_wp)
  rr(:) = REAL(cc(:), KIND=wp)    !<= runtime error here
  ! rr(:) = REAL(cc(:))           !<= this variant is OK
END PROGRAM test
When I compile without any switch under ifort 16.0.2, the program crashes. It runs well at the -O3 optimization level and crashes at the -O0, -O1, and -O2 levels.
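For reference, the behaviour can be reproduced with compile lines like the following (the source file name test.f90 is my assumption):

```shell
ifort -O0 test.f90 -o test && ./test   # low optimization: crashes at n=1100000
ifort -O3 test.f90 -o test && ./test   # -O3: runs fine
```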
It runs well if one switches the working precision (wp) from double precision to single precision (i.e. wp=KIND(1.0)).
Under ifort 16.0.1, everything is OK.
I get a stack overflow, avoided simply by boosting the stack limit.
No doubt the update is causing you slightly increased stack consumption. I see that an extra temporary array and copy is created at the lower optimization levels. I don't know whether that would be accepted as a bug.
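One possible way to sidestep that temporary is to do the conversion with an explicit element loop instead of the whole-array expression. This is only a sketch, and whether a given compiler really avoids the temporary for the loop form is an assumption on my part:

```fortran
PROGRAM no_temp
  INTEGER, PARAMETER :: wp = KIND(1.0d0)
  REAL(wp), ALLOCATABLE, DIMENSION(:) :: rr
  COMPLEX(wp), ALLOCATABLE, DIMENSION(:) :: cc
  INTEGER :: n, i
  n = 1100000
  ALLOCATE(rr(n), cc(n))
  cc = (1.0_wp, 2.0_wp)
  ! Converting element by element needs no whole-array temporary,
  ! so nothing large has to fit on the stack.
  DO i = 1, n
    rr(i) = REAL(cc(i), KIND=wp)
  END DO
END PROGRAM no_temp
```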
Setting the stack to unlimited (ulimit -s unlimited) makes the problem with ifort 16.0.2 go away.
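The shell commands involved (assuming bash; the executable name ./test is my assumption):

```shell
ulimit -s            # show the current stack limit (8192 kB is a common default)
ulimit -s unlimited  # remove the limit for this shell session
./test               # the program now runs without the stack overflow
```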
Regarding ifort 16.0.1: with the standard stack size (8192 kB), the program still runs well even if I enlarge n 1000 times.
Indeed, the problem does seem connected to the stack: if I decrease the stack size to 5000 kB, the program also crashes at n=1000000.
In the case of ifort 16.0.2, the mentioned problem can be solved/avoided by using the -heap-arrays switch. But with the standard stack size (8192 kB) and with -heap-arrays 100000 (meaning that only temporary arrays larger than 100000 kB, i.e. roughly 100 MB, are allocated on the heap), the program also runs well. So either something is wrong, or I do not understand the idea of the -heap-arrays switch.
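For clarity, the two variants of the switch as I understand them (compile lines are a sketch; the file name test.f90 is my assumption):

```shell
ifort -O0 -heap-arrays test.f90 -o test         # put all temporary arrays on the heap
ifort -O0 -heap-arrays 100000 test.f90 -o test  # heap only for temporaries above 100000 kB
```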