Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

Severe: Variable ... too large for NTCOFF. Bigger than 2GB. Use heap instead -- IVF compiler 10.1

xu_jocelyn
Beginner
2,234 Views

The same program runs correctly with a 3D array of (32,32,512) doubles when built with VS2005 plus the Intel Visual Fortran compiler 10.1 (Intel 64). When the array is extended to (512, 512, 512) doubles, however, compilation fails with: Severe: Variable ... too large for NTCOFF. Bigger than 2GB. Use heap instead.

The computer has 8 GB of physical memory. I have read this in the documentation: On Intel 64 based systems running a 64-bit operating system, the maximum array size is limited by the size of the physical memory on the system plus any additional paging or swap space.

There must be some error in my settings. How can I deal with it?

Thanks!

0 Kudos
6 Replies
TimP
Honored Contributor III
2,235 Views
Static arrays are still limited to 2GB in the default memory model. The maximum array size referred to in the notes relates only to ALLOCATABLE arrays in the default model. You should also read the documentation on the /heap-arrays switch, as the message suggests.
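For reference, the switch is passed on the compiler command line. A hedged sketch (option spellings as documented for the Windows and Linux drivers of this era; the size argument is in KB, and the exact behavior should be checked against your compiler version's documentation):

```
:: Windows: temporary arrays larger than 10 KB are placed on the heap
ifort /heap-arrays:10 prog.f90

# Linux equivalent
ifort -heap-arrays 10 prog.f90
```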
0 Kudos
Steven_L_Intel1
Employee
2,235 Views
On Windows, there is only one "memory model". Static code and data is limited to 2GB total, even on 64-bit Windows. This is a Microsoft restriction.

On Linux there are three memory model choices where you can optionally have more than 2GB of static code and data, but not on Windows (nor MacOS).
0 Kudos
Steven_L_Intel1
Employee
2,235 Views
Oh, and the /heap-arrays option applies only to temporary arrays the compiler creates. You will need to change your code to make these large arrays ALLOCATABLE and then ALLOCATE them to the correct size.
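A minimal sketch of the change Steve describes, using the (512,512,512) shape from the original post (the array name and program name here are illustrative, not from the poster's code):

```fortran
program big_array
  implicit none
  ! Declared ALLOCATABLE so storage comes from the heap at run time,
  ! avoiding the 2GB static-data limit on Windows.
  real(8), allocatable :: a(:,:,:)
  integer :: istat

  allocate (a(512, 512, 512), stat=istat)   ! 512**3 * 8 bytes = 1 GiB
  if (istat /= 0) stop 'allocation failed'

  a = 0.0d0                                 ! use the array as before

  deallocate (a)
end program big_array
```

Checking the ALLOCATE status with stat= lets the program fail gracefully if the system cannot supply the memory, instead of aborting.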
0 Kudos
xu_jocelyn
Beginner
2,235 Views

The program runs well on Windows when I make the large arrays ALLOCATABLE and use the ALLOCATE statement. Many thanks for all of your suggestions. :-)

Other questions---

1. For the same data size, does static data run faster than ALLOCATABLE arrays, and is that why static data is adopted more often?

2. On Linux, is choosing a memory model that allows more than 2GB of static code and data more beneficial than using ALLOCATABLE arrays?

Thanks.

0 Kudos
Steven_L_Intel1
Employee
2,234 Views
The main advantage of static data (code is always static) is that no extra code needs to be generated to allocate and deallocate it. For arrays of any reasonable size, this is not relevant. In years past, accessing static data was faster but nowadays the difference is often hard to measure.

I would recommend using ALLOCATABLE for large arrays regardless.
0 Kudos
mel_de_leon
Beginner
2,234 Views

I got this message, together with some Windows memory-limit errors, and I resolved it by applying the following:
/heap-arrays:0
I made the common blocks dynamic:
-Dynamic common blocks: block names
and also increased the memory limits in the linker:
-Heap reserve size: bytes
-Heap commit size: bytes
-Stack reserve size: bytes
-Stack commit size: bytes
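For readers setting these from the command line rather than the Visual Studio property pages, the underlying MSVC linker options are /STACK and /HEAP, each taking reserve and optional commit sizes in bytes. A sketch (the numbers are illustrative, not recommendations):

```
:: reserve 64 MB of stack and heap, committing 1 MB of each up front
ifort prog.f90 /link /STACK:67108864,1048576 /HEAP:67108864,1048576
```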


0 Kudos