Intel® Fortran Compiler

Puzzling compiler message

WSinc
New Contributor I

The following routine calls itself recursively -

I get a WARNING message about incompatible argument types (line 11).

I don't understand this, since isq+1 and isq are both integer(1), and inum1 and inum are both integer(1) as well.

 

Do you have any idea why I would see this?

Would it think that isq+1 is a different type?

Of course I could say isq1 = isq + 1 and then pass that along, if isq1 is also declared integer(1).

IanH
Honored Contributor III

billsincl wrote:

The following routine calls itself recursively -

I get a WARNING message about incompatible argument types,( line 11)

I don't understand this, since isq+1 and isq are both integer(1), and

isq may be integer(1), but the literal constant `1` is default integer, which is integer(4) for ifort. When you add two integers of mixed kinds, the result of the expression takes the kind with the greater decimal exponent range; of integer(1) and integer(4), that is integer(4).

Hence `isq + 1` is integer(4), and you have a kind mismatch with the integer(1) dummy argument.

`isq + 1_1` however...
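
For illustration, here is a minimal sketch (the routine name and signature below are invented for the example; the original code is not shown in this thread):

```fortran
program kind_demo
    implicit none
    call walk(0_1, 0_1)
contains
    recursive subroutine walk(isq, inum)
        integer(1), intent(in) :: isq, inum
        print *, 'isq =', isq, ' inum =', inum
        if (isq < 5_1) then
            ! isq + 1 is integer(4): the default-kind literal 1 promotes the sum,
            ! so this call would draw the incompatible-kind warning:
            !     call walk(isq + 1, inum)
            ! isq + 1_1 stays integer(1) and matches the dummy argument:
            call walk(isq + 1_1, inum)
        end if
    end subroutine walk
end program kind_demo
```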

WSinc
New Contributor I

OK, I could also say isq + 1_1 as you suggested.

But there is still the possibility of an integer overflow.

 

For example, suppose I said isq * 127_1. The result is very likely to be WRONG, since an integer(1) result cannot be higher than 127.

So would it be better to change the argument to integer(2)?

 

Or is there an easy way to check for integer overflow? As far as I know you cannot turn on an overflow trap; you have to insert checks for that explicitly.
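
One way to write such an explicit check (a sketch, not code from this thread) is to do the arithmetic in a wider kind and test against huge() of the narrow kind before narrowing back:

```fortran
program overflow_check
    implicit none
    integer(1) :: isq, result
    integer(2) :: wide

    isq = 3_1
    wide = int(isq, 2) * 127_2                ! do the multiply in integer(2)

    if (wide > huge(0_1) .or. wide < -huge(0_1) - 1) then
        print *, 'would overflow integer(1):', wide
    else
        result = int(wide, 1)                 ! safe to narrow back
        print *, 'result =', result
    end if
end program overflow_check
```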

Steven_L_Intel1
Employee

There is not currently a way to check for integer overflow. Is there a reason you've chosen a small kind here? Typically you do this only when accessing data structures defined with small integers. Otherwise you'll get better performance sticking to default integer kind.

WSinc
New Contributor I

Well, I reasoned that when doing VERY LARGE numbers of operations, the processing time would be reduced by comparing smaller-sized integers. Otherwise the default would be integer(4).

 

But maybe this isn't strictly correct. It depends upon whether the comparison STARTS with the least significant byte or the MOST significant one.

The comparison might take 4 times as long with 4-byte integers if the first three bytes are all zero in most cases.

This might also hold for ordinary operations, like +, -, /, or *, for example.

It would be interesting to run some test cases, see what the outcome is.

WSinc
New Contributor I

Here is a little run-time test case:
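
(The original listing is not reproduced in this thread; a minimal sketch of this kind of timing test, using cpu_time and three loops of 2**32 comparisons over integer(1), integer(2), and integer(4) operands, might look like the following. Compile without optimization, or the loops may be folded away.)

```fortran
program compare_timing
    implicit none
    integer(8), parameter :: niter = 2_8**32   ! roughly 4 billion comparisons per case
    integer(1) :: a1, b1
    integer(2) :: a2, b2
    integer(4) :: a4, b4
    integer(8) :: i, hits
    real(8)    :: t0, t1

    a1 = 5_1;  b1 = 7_1
    a2 = 5_2;  b2 = 7_2
    a4 = 5;    b4 = 7

    hits = 0
    call cpu_time(t0)
    do i = 1, niter
        if (a1 < b1) hits = hits + 1           ! integer(1) comparison
    end do
    call cpu_time(t1)
    print *, 'integer(1):', t1 - t0, 's'

    hits = 0
    call cpu_time(t0)
    do i = 1, niter
        if (a2 < b2) hits = hits + 1           ! integer(2) comparison
    end do
    call cpu_time(t1)
    print *, 'integer(2):', t1 - t0, 's'

    hits = 0
    call cpu_time(t0)
    do i = 1, niter
        if (a4 < b4) hits = hits + 1           ! integer(4) comparison
    end do
    call cpu_time(t1)
    print *, 'integer(4):', t1 - t0, 's'

    print *, 'hits =', hits                    ! keep hits live so the loops are not removed
end program compare_timing
```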

Contrary to what I said, I get really surprising results.

I don't see why comparisons of shorter integers would give LONGER CPU times.

But apparently that's the case. Run this yourself, see if you get the same thing.

 

When I run it repeatedly, I get pretty consistent answers.

Each one of the three cases does roughly 4 billion comparisons (2**32, in fact).

I hope this gives a fairly reliable measure of actual CPU time.

Steven_L_Intel1
Employee

That's because the processor is optimized for default integer sizes. It actually takes longer to compare shorter integers.
