Intel® Fortran Compiler

How do you turn off argument checking?

robert_elward
Beginner

I have an old scientific Fortran program that made liberal use of allocating one very large array in the main routine and then chopping that array (memory) into smaller pieces by passing chunks, at various offsets, to subroutines.  The array in the main program was of type real or double precision, while in the receiving routine it could be declared as integer, real, or double precision.  Also, in the receiving routine the array could be declared with a single dimension or multiple dimensions.

 

How do I tell the compiler not to check the arguments for this type of inconsistency?  I would like to be able to do this for the entire program in one place.

 

Here's an example:

 

       program mymain
       dimension a(20)
       n1=1
       n2=n1+7
       n3=n2+6
       n4=n3+3
       call dowork( a(n1), a(n2), a(n3), a(n4) )
       stop
       end

       subroutine dowork( ifirst, asecond, ithird, afourth )
       dimension ifirst(7), ithird(3)
       double precision afourth
       dimension asecond(2,3), afourth(2)
c      Do work
       return
       end

 

Steven_L_Intel1
Employee

Fortran > Diagnostics > Check Routine Interfaces > No. But I suspect you are unaware that your code may be giving wrong results due to the mismatch - especially single-double.

jimdempseyatthecove
Honored Contributor III

Try /warn:nointerfaces

Generally, when interfaces are NOT used, and the called subroutine/function is NOT visible to the compilation unit containing the CALL, the arguments will not (cannot) be checked. An exception is when you enable multi-file IPO, which is on by default in newer releases. Therefore you may also need to disable multi-file IPO.

Note, except for unusual circumstances you should fix the code.

Jim Dempsey

robert_elward
Beginner

Jim and Steve,

Thanks for the very quick response.  This takes care of my issue.

Well aware of what turning this off does.  But back in the seventies this style of Fortran programming was done all the time in very large scientific programs used for finite element analysis, where the problem size changes from analysis to analysis and there was no dynamic memory allocation in Fortran.

I'm showing my age here...but still programming :).

Thanks,

Bob

Steven_L_Intel1
Employee

True, but it was also often the case, back in the 70s, that single and double precision had the same layout in the first 32 bits and that the second 32 bits were just extra precision. That is not the case with IEEE floating point.

Also, manipulating integers as floats can cause the hardware to change the values if the bit patterns look like denormals or NaNs. Code such as you showed in the first post would give wrong answers on modern hardware.
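To make the layout hazard concrete, here is a hypothetical sketch (program and names invented, not from Steve's post) of a double precision actual argument received by a dummy declared as default real, in the same fixed-form style as the original example:

c      Hypothetical sketch: a double precision actual argument is
c      received by a dummy declared as default real. Under IEEE
c      arithmetic the two 32-bit halves of the double's bit pattern
c      are NOT "the value plus extra precision".
       program mismatch
       double precision d(1)
       d(1) = 1.0d0
       call show( d )
       end

       subroutine show( r )
       dimension r(2)
c      r(1) and r(2) reinterpret the two halves of the IEEE double
c      1.0d0 as single precision values; neither reproduces 1.0.
       print *, r(1), r(2)
       return
       end

On typical little-endian IEEE hardware this prints roughly 0.0 and 1.875 (the exact pair depends on byte order), illustrating why the first word of a double is no longer a usable single precision approximation.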

dboggs
New Contributor I

I too utilized this technique of "primitive dynamic memory allocation" for exactly the reasons you describe. And I have not (yet?) "fixed" my main code that did this. I guess I have not really absorbed why "fixing" it is so important. I do get the idea, but a rundown on the risks would be useful here.

Retaining this type of memory allocation means fewer changes to the existing code. Changing it in some cases means major changes, and with that is the risk of introducing new bugs.

In new programs I utilize true dynamic allocation, but I'm not so sure it is a good idea to change old ones--at least if they are humongous and have undergone years of debugging and proof testing.

(I also feel there are some advantages of this old-fashioned method, but I won't get into that now.)
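For comparison, the modern Fortran 90 equivalent of the one-big-workspace scheme uses ALLOCATABLE arrays, so each former chunk gets its own correctly typed, run-time-sized storage. A minimal sketch, reusing the array names and the made-up sizes from the original example:

       program modernmain
c      Each former "chunk" of the big real array becomes its own
c      correctly typed allocation, sized at run time.
       integer, allocatable :: ifirst(:), ithird(:)
       real, allocatable :: asecond(:,:)
       double precision, allocatable :: afourth(:)
       allocate( ifirst(7), asecond(2,3), ithird(3), afourth(2) )
       call dowork( ifirst, asecond, ithird, afourth )
       deallocate( ifirst, asecond, ithird, afourth )
       end

The subroutine can then declare each dummy with its true type and shape, and interface checking becomes an asset rather than something to suppress.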

Steven_L_Intel1
Employee

This home-grown dynamic allocation is not the issue - keep using that if you like. But argument data type mismatches can bite you.
 

John_Campbell
New Contributor II

I agree with dboggs that minimising changes to old, proven code is the way to go.
You should also be careful with cosmetic changes to modern syntax, as some changes can bring problems, especially changing "dimension ifirst(1)" to "dimension ifirst(:)" rather than "dimension ifirst(*)".
I have seen that the use of (:) can change the assumption of contiguous memory, which was the basis of the 70's-style approach.
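A hedged sketch of the distinction being drawn here (invented subroutine names, declarations only):

c      Assumed-size dummy: old-style sequence association. The
c      compiler assumes the actual argument is contiguous, which is
c      exactly what the 70's offset-into-a-big-array technique
c      relies on.
       subroutine oldstyle( a, n )
       dimension a(*)
       do 10 i = 1, n
          a(i) = 0.0
   10  continue
       return
       end

c      Assumed-shape dummy: requires an explicit interface (module
c      procedure or interface block). The actual argument may be a
c      non-contiguous array section passed by descriptor, so the
c      contiguity assumption no longer holds.
       subroutine newstyle( a )
       real a(:)
       a = 0.0
       return
       end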

Steve's point is also valid, but not as general. I have not seen mixing of real and double precision variables or constants where the information actually needs to be transferred; for most compilers that would always have caused an error. Unfortunately, in my experience mixing of integer kinds is more common and can lead to portability problems. This mixed-kind problem more often applies to variables rather than arrays, which are the basis of the 70's approach.

John
