Intel® Fortran Compiler

Perils of constants as arguments

dboggs
New Contributor I
1,211 Views

I am writing a subroutine intended for redistribution to other coders in a static library:

SUBROUTINE Subname (irow, icol, ra, rb, ...)
   :
   IF (irow == 0) irow = INT(ra)
   IF (icol == 0) icol = (..something..)
   :
END SUBROUTINE Subname

The first two arguments, irow and icol (among others), are altered with the intent of returning updated values to the caller. The concern is that most users, in most cases, will call the routine as

irow = 0
icol = 0
CALL Subname (irow, icol, ra, rb, ...)

But some lazy coders, thinking brevity is advantageous, will write

CALL Subname (0, 0, ra, rb, ...)

I know from my early days in F77 that this practice is a no-no and asks for trouble. There is information available on the internet saying that the actual behavior is highly compiler-dependent, and it goes on to describe various unintended side effects that can happen. What is the situation with IVF? I have had mixed results. Should it always work, never work, or only sometimes? How best to protect careless reusers of the library code against catastrophic crashes? It could probably be done via appropriate compiler switches in a debug build, but I need a release build with minimum baggage.

What is the best recommendation?

0 Kudos
14 Replies
Steven_L_Intel1
Employee
1,211 Views

Current IVF (as in, anything since 10.0) will give you an access violation if you do this. We do have an option, /assume:noprotect_constants, which will pass a writable copy of constant actual arguments. I would not bother trying to test for this. Ideally you'd have an explicit interface with INTENT(INOUT) - if that interface is visible to the caller, Intel Fortran will give an error like this:

t2.f90(1): error #6638: An actual argument is an expression or constant; this is not valid since the associated dummy argument has the explicit INTENT(OUT) or INTENT(INOUT) attribute.   [2]
call sub (2)
----------^

You can force callers to have that explicit interface by making all the routines module procedures. This would be the best methodology.
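
A minimal sketch of that arrangement (the module wrapper, the demo program, and the INT(rb) line are illustrative stand-ins, not the actual library code):

[fortran]
MODULE sub_mod                            ! hypothetical module name
  IMPLICIT NONE
CONTAINS
  SUBROUTINE Subname (irow, icol, ra, rb)
    INTEGER, INTENT(INOUT) :: irow, icol
    REAL,    INTENT(IN)    :: ra, rb
    IF (irow == 0) irow = INT(ra)
    IF (icol == 0) icol = INT(rb)         ! placeholder body
  END SUBROUTINE Subname
END MODULE sub_mod

PROGRAM demo
  USE sub_mod
  IMPLICIT NONE
  INTEGER :: irow, icol
  irow = 0
  icol = 0
  CALL Subname (irow, icol, 2.0, 3.0)     ! fine: irow and icol are definable
  CALL Subname (0, 0, 2.0, 3.0)           ! rejected at compile time with error #6638
END PROGRAM demo
[/fortran]

Because the caller gets the interface through USE, the lazy constant-argument call is caught by the compiler instead of crashing at run time.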

0 Kudos
andrew_4619
Honored Contributor III
1,211 Views

@Steve: Just an observation, but if we call a subroutine with an argument that is a constant, an explicit interface (via a USE) is present, and we then assign to that argument in the subroutine, then:

a) If the dummy arg has INTENT OUT or INOUT we get a compile error.

b) If INTENT is not specified we get no error or warning. (we can get a run time error....)

Does the latter make sense? A warning might be useful, as it is not a smart thing to do. I concede that the code might still be OK, as the program logic might mean the assignment is never made in some specific case. I like compiler warnings that point out dumb code, and in all new code or updates I aim to always ensure INTENT is specified.

 

0 Kudos
IanH
Honored Contributor III
1,211 Views

app4619 wrote:

@Steve: Just an observation, but if we call a subroutine with an argument that is a constant, an explicit interface (via a USE) is present, and we then assign to that argument in the subroutine, then:

a) If the dummy arg has INTENT OUT or INOUT we get a compile error.

b) If INTENT is not specified we get no error or warning. (we can get a run time error....)

Does the latter make sense? A warning might be useful, as it is not a smart thing to do. I concede that the code might still be OK, as the program logic might mean the assignment is never made in some specific case. I like compiler warnings that point out dumb code, and in all new code or updates I aim to always ensure INTENT is specified.

(Note you can get an explicit interface in ways other than by USE'ing a module - with those other ways, the compiler may not have seen the code for the procedure at the point of the procedure's invocation or reference.)

If the INTENT is not specified, then whether the procedure is allowed to define the dummy depends on the nature of the actual argument, while whether the procedure actually defines the dummy depends on the statements it executes. Consider something like:

[fortran]SUBROUTINE modify_with_permission(arg, flag)
  INTEGER :: arg
  LOGICAL :: flag
  IF (flag) arg = 1
END SUBROUTINE modify_with_permission

...

CALL modify_with_permission(1, .FALSE.)    ! This is ok.
CALL modify_with_permission(1, .TRUE.)     ! This is not.
[/fortran]

It is a tall order to ask a compiler to simulate the execution of a program when compiling - because that may actually require execution of the program.

0 Kudos
andrew_4619
Honored Contributor III
1,211 Views

Your example is exactly the case I fixed earlier today. But I don't think it is a tall order, because the compiler complains (with an error) if the dummy has INTENT(OUT) simply because you modified the argument somewhere in the subroutine code, irrespective of whether that assignment would ever be executed. I guess what I am saying is that the generated/supplied interface could have an 'implied intent' where none is explicitly stated, and the compiler could generate a warning if that is violated.

0 Kudos
Steven_L_Intel1
Employee
1,211 Views

Omitting intent is not the same as INOUT - INOUT requires that the actual argument be "definable", whereas omitted intent does not.
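
A small illustration of the difference (module and procedure names are hypothetical):

[fortran]
MODULE peek_mod                     ! module so that both interfaces are explicit
  IMPLICIT NONE
CONTAINS
  SUBROUTINE peek (n)               ! no INTENT given
    INTEGER :: n
    PRINT *, n                      ! n is only read, so CALL peek(5) is legal
  END SUBROUTINE peek

  SUBROUTINE peek_inout (n)         ! INTENT(INOUT) declared
    INTEGER, INTENT(INOUT) :: n
    PRINT *, n                      ! n is still only read...
  END SUBROUTINE peek_inout
END MODULE peek_mod
! ...yet CALL peek_inout(5) is rejected at compile time, because INTENT(INOUT)
! requires a definable actual argument regardless of what the routine does with
! it, whereas the intent-less peek only gets into trouble if it actually tries
! to define n at run time.
[/fortran]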

I've often thought about generated interfaces trying to deduce an intent from the code, but there are too many cases where it just can't tell, for example, if assigning to argument A is conditional on the value of argument B. Furthermore, I'd rather programmers move to providing their own explicit interfaces, through modules, contained routines or interface blocks (only if you must). A suggestion I like better is an option to warn if an explicit interface isn't visible for any routine call.

0 Kudos
andrew_4619
Honored Contributor III
1,211 Views

Steve Lionel (Intel) wrote:
...but there are too many cases where it just can't tell, for example, if assigning to argument A is conditional on the value of argument B.

I would be inclined to agree with that, but for the fact that the Intel compiler can already do it, insofar as that case already generates an error if an explicit INTENT is violated in the way you describe. So, looking at it the other way: if you assign to or modify a variable in a subprogram, there is an implied OUT intent. To my thinking, violation of that "implied intent" would generate a warning rather than an error and push the programmer towards making the code more robust.

Steve Lionel (Intel) wrote:
A suggestion I like better is an option to warn if an explicit interface isn't visible for any routine call.
I would agree with that, but it doesn't help this case, as you can have an explicit interface without explicit intent! Perhaps an option to warn where INTENT is not specified...

Having recently "modulised" a few hundred remaining routines in my main application and having now achieved 100% explicit interfaces, I do have a wish to have 100% specified intent as well. This will be a gradual process, but at some point, when it is mostly achieved, a warning for missing intent would be useful to enforce good practice on future new/modified code.

0 Kudos
John_Campbell
New Contributor II
1,211 Views

app4619,

Having recently "modulised" a few hundred remaining routines in my main application and having now achieved 100% explicit interfaces I do have a wish to have 100% specified intent also

I am very interested in your approach to achieving 100%, as I tried to do the same but gave up. The problem I had then was with existing libraries, plus a reluctance to contain routines in modules. I also tried a universal interface block as an include file, but you can't have an interface block for a routine visible inside that routine itself (a missed opportunity for code checking).

The best solution I found, I thought, was to create a single file that INCLUDEd all the other files in the project, and then compile that file so the compiler saw all routines in a single compilation. I assumed the compiler would do all the cross-checking.

The big danger with interface definitions is that when you change the argument list of a routine, you then have to track down every occurrence of an explicit interface written for that routine. Writing interfaces is not the last thing you do when creating code!
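
A sketch of that hazard (routine and file names are hypothetical), assuming the interface block lives in an include file as described above:

[fortran]
! Hand-maintained interface, kept in an include file and pulled into every
! caller with something like INCLUDE 'lib_interfaces.fi':
INTERFACE
  SUBROUTINE scale_vec (v, n)       ! illustrative library routine
    REAL    :: v(*)
    INTEGER :: n
  END SUBROUTINE scale_vec
END INTERFACE
! If scale_vec later gains or changes an argument, every copy of this block has
! to be found and updated by hand, and a stale copy silently misdescribes the
! routine to its callers.  A module procedure's interface, by contrast, is
! regenerated from the source every time the module is recompiled.
[/fortran]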

I'd be very interested to hear how you have approached this and achieved 100% success,

John

0 Kudos
IanH
Honored Contributor III
1,211 Views

John Campbell wrote:
...The problem I had then was ... a reluctance to contain routines in modules...

Why?

0 Kudos
DavidWhite
Valued Contributor II
1,211 Views

Steve,

You commented "I'd rather programmers move to providing their own explicit interfaces, through modules, contained routines or interface blocks (only if you must)."

Can you elaborate, perhaps indicating what best practice should be for this?

Thanks,

David

0 Kudos
Steven_L_Intel1
Employee
1,211 Views

Mainly I meant use modules for everything - no external procedures.

0 Kudos
John_Campbell
New Contributor II
1,211 Views

Steve,

Would your approach be to convert each library into a module that defines the data structures for that library and then contains all of the library's routines? You would then USE the module?
Where would this library module be stored?

My approach is to store libraries in other folders as stable code, convert the .obj files to a .lib, then link what is required when it is required. (Actually, for ifort, I link the \lib_path\*.obj files.) I have between 4 and 10 libraries for most projects, holding the same stable code that I have been using for many years. For example, the libraries I use include:

  • file management library
  • free format I/O library
  • binary file structure file
  • matrix and vector library
  • general graphics library
  • plotter graphics library
  • windows graphics library
  • statistics functions library
  • system timing library
  • project utility library

I presume the module approach would link everything - or would I have to create a new code set for every new project?
Ian, that's my reason why.

John

0 Kudos
IanH
Honored Contributor III
1,211 Views

I don't see how putting procedures into modules really changes things.

For each library I have a few (which may be one) "top level" modules that expose the types and procedures that the client of the library is expected to work with in some sort of convenient grouping (those top level modules may then use lower level modules that do the grunt work - but that's implementation detail that clients of the library aren't supposed to be aware of). 
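
A sketch of that layering (all module and procedure names are hypothetical):

[fortran]
MODULE graphics_primitives            ! lower-level worker module
  IMPLICIT NONE
CONTAINS
  SUBROUTINE line()
  END SUBROUTINE line
  SUBROUTINE circle()
  END SUBROUTINE circle
END MODULE graphics_primitives

MODULE graphics_text                  ! another worker module
  IMPLICIT NONE
CONTAINS
  SUBROUTINE draw_label()
  END SUBROUTINE draw_label
END MODULE graphics_text

MODULE graphics_lib                   ! the "top level" module clients are meant to USE
  USE graphics_primitives, ONLY: line, circle
  USE graphics_text,       ONLY: draw_label
  IMPLICIT NONE
  PRIVATE
  PUBLIC :: line, circle, draw_label  ! re-export only the intended API
END MODULE graphics_lib
[/fortran]

Client code then just USEs graphics_lib and never needs to know about the worker modules.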

When it makes sense to pre-compile the library, I compile the library code to generate the mod files and obj files.  I use the librarian utility to put all the obj files into a lib (or sometimes I use the linker to make a DLL and import library).  The mod files (with ifort it has to be all the mod files, not just the top level ones that the client is expected to use :( ) and lib file are placed (or created) in a compiler + platform + configuration specific directory. 

When I compile client code (as in code that relies on the library) I reference the compiler + platform + configuration specific directory in the Fortran > General > Additional include directories property.  I also supply that directory in the Linker > General > Additional Library Directories property, and the name of the lib file in the Linker > Inputs > Additional Dependencies property (i.e. the same as you probably do now). 

(The directory for mod files doesn't have to be the same one for the lib file.  The compiler+platform+configuration directory for mod files and the same directory for lib files could be shared across multiple libraries, but then you have to be careful about filename clashes.)

Source code that needs to reference stuff from the library USE's the appropriate top level module.

To try and make things less specific to a particular machine I use environment variables to locate the "root" of the source/build tree for a particular library.  This is handy for third party libraries that fit this sort of model in some way - because I can use the environment variable to switch between different versions on my machine.

So really, the only additional complication is the need to reference the directory with the mod files in the project using the library.

In other cases (more often than not) I simply have the source code control system drop the code for the library into a subdirectory of the project that is using the library and just add the source files and build that library as part of the normal build process for the overall solution.

If you are into distributing your libraries to others in a pre-compiled form then use of modules may mean that you need to distribute compiler specific mod files (you still may not need to if your procedures are BIND(C)).  But in many cases, unless you really limit what you use, the object code is compiler specific anyway.

For clarity around the "link everything" comment - for a time the smallest unit that the windows linker could include or exclude from an EXE or DLL was a single object file (and note that perhaps this time is back before the invention of the transistor).  In the case where that object file had been bundled up with other object files into a lib file the linker could still work at the original granularity of the object file, but no finer.  This object file granularity doesn't have to be the case today, but I don't know whether ifort takes advantage of that.  Assuming it doesn't (i.e. object file granularity still applies) then if you put every procedure in a library actually into the one module (i.e. after processing INCLUDE lines you effectively had one whopping great big source file) then you might find that the linker includes lots of unnecessary or dead code in the final exe.

But that one whopping big file setup is not the same as having a top level module (stored in its own file - hence compiled to its own obj) that USE's a whole heap of lower level modules (perhaps each stored in their own file - hence compiled to separate obj files).  The linker doesn't care about modules - mod files are for the compiler - the linker just works with obj files.

(And note with cross file interprocedural optimization this granularity aspect is moot anyway.)

0 Kudos
andrew_4619
Honored Contributor III
1,211 Views

John Campbell wrote:
I am very interested in your approach to achieving 100%, as I tried to do the same but gave up. The problem I had then was with existing libraries, plus a reluctance to contain routines in modules. I also tried a universal interface block as an include file, but you can't have an interface block for a routine visible inside that routine itself (a missed opportunity for code checking).

There is some useful comment in some of the posts above. But my experience was as follows (I must add there is absolutely nothing at all clever here):

The application has a number of modules containing data declarations and routines in logical groups (OS utilities, math/vector, graphics, etc.). I tend to go for one module per source file, as that keeps dependencies simpler. The easiest way to deal with all the external procedures is to put them all in one source file, wrap a module with CONTAINS around them, and then add a lot of USE statements throughout the rest of the app. No rocket science there! I didn't simply do that, because compile time for a source file is non-linear with its size, so I like to limit sources to a maximum of about 10,000 lines of code. I split them into three source files, let's say modules A, B and C, where B uses A and C uses B; you then need to "USE C" in some other parts of the application (a sketch of that layering follows the list below). The tricky bit is moving routines around from file to file to avoid any circular dependencies... It took a few hours to sort out, but:

a) The overall build time seems quite a bit faster.

b) I now have explicit interfaces for everything so have better code checking.
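
As promised above, a sketch of the A/B/C layering (module and routine names are illustrative):

[fortran]
MODULE mod_a                 ! lowest layer: USEs neither of the other two
  IMPLICIT NONE
CONTAINS
  SUBROUTINE a_util()
  END SUBROUTINE a_util
END MODULE mod_a

MODULE mod_b                 ! middle layer
  USE mod_a
  IMPLICIT NONE
CONTAINS
  SUBROUTINE b_util()
    CALL a_util()
  END SUBROUTINE b_util
END MODULE mod_b

MODULE mod_c                 ! top layer; the rest of the application USEs this
  USE mod_b
  IMPLICIT NONE
CONTAINS
  SUBROUTINE c_util()
    CALL b_util()
  END SUBROUTINE c_util
END MODULE mod_c
[/fortran]

Because the USE statements only ever point downwards (C to B to A), the files can be compiled in the order A, B, C with no circular dependency.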


0 Kudos
jimdempseyatthecove
Honored Contributor III
1,211 Views

A method I use to avoid circular dependencies is to lift the code that causes them out of the modules that declare the data and interfaces. Example: if A uses B, B uses C, and C uses A, then extract the subroutines and functions that require the offending USE from the respective modules, insert INTERFACE blocks for them in their place, and move the extracted code into a library project. The compiler then never sees a circular dependency.
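
An illustrative sketch of that lifting (all names are hypothetical): a routine in module_c originally needed USE module_a, which closed the A -> B -> C -> A cycle, so it is moved out to a library project and only its interface stays behind:

[fortran]
MODULE module_a
  IMPLICIT NONE
  REAL, PARAMETER :: scale_a = 2.0
END MODULE module_a

MODULE module_c
  IMPLICIT NONE
  INTERFACE
    SUBROUTINE c_uses_a (x)          ! now an external routine in the library project
      REAL, INTENT(INOUT) :: x
    END SUBROUTINE c_uses_a
  END INTERFACE
END MODULE module_c

! In the library project:
SUBROUTINE c_uses_a (x)
  USE module_a                       ! the USE that used to create the cycle
  REAL, INTENT(INOUT) :: x
  x = x * scale_a
END SUBROUTINE c_uses_a
[/fortran]

Since module_c no longer USEs module_a, the compiler never sees a cycle; only the lifted routine, built separately in the library, depends on both.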

Jim Dempsey

0 Kudos
Reply