<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic The Perils of Real Numbers (Part 2) in Intel® Fortran Compiler</title>
    <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844709#M62678</link>
    <description>&lt;P&gt;June 2001&lt;/P&gt;
                  &lt;H3&gt;&lt;A name="Eklund" target="_blank"&gt;&lt;/A&gt;The Perils of Real Numbers (Part 2)&lt;/H3&gt;
                  &lt;H4&gt;Dave Eklund&lt;BR /&gt;



                    Compaq Fortran Engineering&lt;/H4&gt;
                  &lt;P&gt; In Part 1 we offered the 
                    following problematical program: &lt;/P&gt;
                  &lt;PRE&gt;
      i = 1000000013 
      x = i
      type 1, i, x
1     format(1x,i,1x,f20.5)
      end&lt;/PRE&gt;
                   which gives: 
                  &lt;PRE&gt; 1000000013     1000000000.00000&lt;/PRE&gt;
                   
                  &lt;P&gt; Where did the "unlucky 13" go!? Why does it come 
                    back if we use /real_size:64? Let's look a little more closely 
                    at the distribution of integer and real numbers. You will 
                    recall that any integer is represented simply as the sum of 
                    POSITIVE (and zero) powers of 2, and there is no exponent 
                    field. This results in a flat distribution of values from 
                    -2**31 all the way up to 2**31-1, or -2147483648 up to 2147483647. 
                    Every integer value between these end points is included. 
                    There is only one value of zero. There is one value which 
                    does not have a counterpart of opposite sign (-2**31). Notice 
                    that this means that all of the integer values are "evenly 
                    spaced" across the entire range. &lt;/P&gt;
                  &lt;P&gt; The same general statements hold for all the other integer 
                    types (KIND = 1, 2, and 8 or their non-standard namings: integer*1, 
                    integer*2 and integer*8). All evenly spaced, and no exponent 
                    field. Not having an exponent field means, in effect, that 
                    there are 31 contiguous bits of "value" in an integer, 
                    whereas there are only 24 such bits in a real number (the 
                    23 fraction bits and the hidden bit). In a real number the 
                    rest of the bits are sign (1 bit) and exponent (8 bits). &lt;/P&gt;
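You can pick these three fields apart for yourself. Here is a short Python sketch (an illustration, not part of the original Fortran example) that packs 1.0 into single precision with the standard struct module and extracts the sign, exponent and fraction:

```python
import struct

# Pack 1.0 into IEEE single precision and pull the three fields apart.
bits = struct.unpack('>I', struct.pack('>f', 1.0))[0]

sign     = bits // 2**31           # 1 bit
exponent = (bits // 2**23) % 256   # 8 bits, biased by 127
fraction = bits % 2**23            # 23 bits; the hidden bit is implied

# 1.0 = +1.0 * 2**0: sign 0, biased exponent 127, zero fraction
assert sign == 0
assert exponent == 127
assert fraction == 0
```

The bias of 127 on the exponent field is what lets both tiny and huge magnitudes share the same 8 bits.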
                  &lt;P&gt; So let's look at what whole numbers we can represent as 
                    a real number. Well, we already know that we can represent 
                    any "small" whole number. In fact there is no difficulty 
                    whatsoever representing any whole number up to 2**24. But 
                    then something unusual happens. Take the following program: 
                  &lt;/P&gt;
                   
                  &lt;PRE class="FtnCode"&gt;
    integer :: two_24 = 2**24
	  
    do k = -2, 2
    i = two_24 + k
    type 1, i, i, float(i), float(i), float(i)
1   format(i9,1x,z9,1x,f12.1,1x,b33.32,1x,z)
    enddo

    end&lt;/PRE&gt;
                  &lt;P&gt;The program prints the whole numbers just before and after 
                    2**24 as integers and as real numbers. The result is shown 
                    below: &lt;/P&gt;
                  &lt;PRE class="FtnCodeSmall"&gt;
Integer:   in hex:  Real number:      Real in binary: 

16777214    FFFFFE   16777214.0  01001011011111111111111111111110 
16777215    FFFFFF   16777215.0  01001011011111111111111111111111
16777216   1000000   16777216.0  01001011100000000000000000000000
16777217   1000001   16777216.0  01001011100000000000000000000000 
16777218   1000002   16777218.0  01001011100000000000000000000001 

&lt;/PRE&gt;
                  &lt;P&gt; While we had no difficulty representing the value 2**24+1 
                    as an integer, it was quite impossible as a real number. The 
                    integer value in hex is: 1000001 -- notice that the first 
                    and last "1" bits are 25 bits apart! And this is 
                    not possible with the 24-bit fraction field of the real number! 
                    Hence 16777217 is the first whole number that we cannot represent 
                    as a real. Looked at another way, 16777215 is the last "odd" 
                    whole number that can be represented as a (single precision) 
                    real. Trivia buffs, rejoice! &lt;/P&gt;
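The same experiment is easy to repeat in Python by rounding each whole number to single precision with the struct module (a sketch for checking, not the article's original Fortran):

```python
import struct

def to_single(x):
    # Round a Python float (a double) to IEEE single precision and back.
    return struct.unpack('>f', struct.pack('>f', x))[0]

assert to_single(16777215.0) == 16777215.0   # last odd representable value
assert to_single(16777216.0) == 16777216.0   # 2**24, still exact
assert to_single(16777217.0) == 16777216.0   # the "+1" is rounded away
assert to_single(16777218.0) == 16777218.0   # even values survive
```

Note that 16777217 sits exactly halfway between two representable singles, and round-to-nearest-even sends it down to 16777216.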
                  &lt;P&gt; From 2.**24 up to 2.**25 we can only represent every other 
                    whole number (all the even ones) -- we step by two. From 2.**25 
                    up to 2.**26 we can represent every fourth whole number (all 
                    those evenly divisible by 4.). And so it goes. By the time 
                    we get up to 1000000013. (the number in the first example 
                    above), the two closest representable real numbers are: 1000000000. 
                    (4E6E6B28 in hex) and 1000000064. (4E6E6B29 in hex) which 
                    are 64. apart! &lt;/P&gt;
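A quick Python check (again squeezing values into single precision via struct) confirms both the 64-unit spacing at this magnitude and the fate of the unlucky 13:

```python
import struct

def to_single(x):
    # Round a double to IEEE single precision and back.
    return struct.unpack('>f', struct.pack('>f', x))[0]

def next_single_up(x):
    # Increment the underlying bit pattern to reach the adjacent single.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return struct.unpack('>f', struct.pack('>I', bits + 1))[0]

assert to_single(1000000013.0) == 1000000000.0       # the 13 is rounded away
assert next_single_up(1000000000.0) == 1000000064.0  # neighbours 64 apart
```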
                  &lt;P&gt; The thing to remember is that as the real numbers get larger, 
                    they get further and further apart! That low order bit in 
                    the fraction gets to represent larger and larger "steps" 
                    between adjacent numbers. The "step size" is directly 
                    determined by the exponent field value. You will find that 
                    real numbers are really "dense" near zero. In fact 
                    very close to 50% of the real numbers lie between -1.0 and 
                    1.0! The same is true for double precision. With double precision 
                    instead of 23 fraction bits (and a hidden bit) we have 52 
                    bits (and a hidden bit). This allows us to express all the 
                    whole numbers up to 2**53, but not 2**53+1. This is why /real_size:64 
                    causes the original example to "work" (not lose 
                    the unlucky 13)! &lt;/P&gt;
                  &lt;P&gt; In fact, since double precision has 53 fraction bits, ANY 
                    32-bit integer value can be represented EXACTLY as a double 
                    precision value. Similarly any integer(kind=2), which is a 
                    16-bit integer, can be represented EXACTLY as a real (24 covers 
                    16 just as 53 covers 32!). &lt;/P&gt;
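Since Python's floats are IEEE doubles, these claims can be checked directly (a sketch, using struct only for the single-precision case):

```python
import struct

# Any 32-bit integer fits exactly in a double's 53 significant bits...
assert float(2**31 - 1) == 2147483647.0
assert float(-2**31) == -2147483648.0
assert float(1000000013) == 1000000013.0     # the "unlucky 13" survives
# ...but 2**53 + 1 does not fit, just as the article says.
assert float(2**53 + 1) == float(2**53)

# Likewise any 16-bit integer fits exactly in a single's 24 bits.
assert struct.unpack('>f', struct.pack('>f', 32767.0))[0] == 32767.0
```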
                  &lt;P&gt; Does this mean that real numbers are "less precise" 
                    as we get further from zero? Curiously enough, the answer 
                    is no. While the representable numbers are further apart, 
                    they still have exactly the same number of "significant 
                    bits" -- 24 or 53 for real and double precision respectively. 
                    Significant bits? What about significant digits? When we talk 
                    about "significance", we are talking about the number 
                    of leading non-zero bits (or digits) that are known to be 
                    "present" or fully representable. Remember that 
                    we were able to express 16777216 but not 16777217 as a real? 
                    Well, the 1677721 part (24 bits, 7 digits) was significant, 
                    but that last digit, alas, is imprecise and cannot be represented 
                    in the real number format. For those who love the details, 
                    since it takes log2(10) bits to represent each decimal digit 
                    (3.321928 bits per digit), 24 bits gives us 7.224720 
                    digits--or 7 significant digits. And for double precision 
                    53 bits gives us 53.*LOG10(2.) or 15.95459 digits -- 15 significant 
                    digits (nearly 16). &lt;/P&gt;
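The arithmetic is easy to verify for yourself:

```python
import math

# bits * log10(2) converts significant bits to significant decimal digits
single_digits = 24 * math.log10(2)   # about 7.22
double_digits = 53 * math.log10(2)   # about 15.95

assert round(single_digits, 4) == 7.2247
assert round(double_digits, 2) == 15.95
```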
                  &lt;P&gt; So you are saying that no matter what the real number, there 
                    are always 7 significant digits? Well, yes and no (nobody 
                    ever said this was simple!). There are three major exceptions: 
                    denormalized numbers, +-Infinity, and NaN (Not a Number). 
                    All of these anomalies are recent arrivals on the hardware 
                    scene. So recent, in fact, that the Fortran Standard does 
                    NOT require them, nor pin down their behavior! &lt;/P&gt;
                  &lt;P&gt; For a long time hardware designers were content with integer 
                    and then real data types and ever faster computers to manipulate 
                    them. But there were those who wanted more; those who were 
                    not content that dividing by zero caused their programs to 
                    ABEND (die for you youngsters). Those who wanted to be able 
                    to express 1.0/0.0; those who could visualize 0.0/0.0 (NOT 
                    to be confused with visionaries). Ah, what evil lurks... And 
                    so there came to be the IEEE Standard for Binary Floating-Point 
                    Arithmetic or ANSI/IEEE Std 754-1985. &lt;/P&gt;
                  &lt;P&gt; In this standard you would find definitions of number formats, 
                    basic operations, conversions, exceptions, traps, rounding, 
                    etc. Most modern machines provide hardware (and software) 
                    that conform to this standard. Portability, efficiency and 
                    safety are some of the most important stated goals of this 
                    standard. However, the introduction of +-Infinity and NaN 
                    brought a whole new set of possibilities and problems. &lt;/P&gt;
                  &lt;P&gt; Let's start with Infinity. In the old days there were two 
                    pretty easy ways to get a program to die--divide by zero, 
                    or overflow (multiply two very large numbers together, for 
                    example). Each of these is a limitation of the "range" 
                    of possible result values. If you cannot represent a value 
                    of "Infinity", what result value should be given 
                    to a divide by zero?! Well, there were two schools of thought. 
                    Some wanted their program to die (division by zero is ALWAYS 
                    a mistake that was not checked for in MY algorithm). &lt;/P&gt;
                  &lt;P&gt; Others wanted to "keep on trucking" (you simply 
                    cannot just die after 3000 hours of running MY program!) with 
                    some artificial, but specified, value as the result. While 
                    the latter group wanted "non-stop" computing, they 
                    also wanted some indication that their final results might 
                    be tainted. They successfully lobbied for special values: 
                    Infinity, -Infinity and NaN, and a "standard" treatment 
                    of these values in subsequent arithmetic computations and 
                    comparisons. So, for example if the user:&lt;/P&gt;
                  &lt;TABLE width="100%" border="1"&gt;
                    &lt;TBODY&gt;&lt;TR&gt; 
                      &lt;TH width="100" class="tableDataHeader"&gt;Computes&lt;/TH&gt;
                      &lt;TH class="tableDataHeader"&gt;The 
                        result is&lt;/TH&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;2.0 
                        * 4.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;8.0 
                        (usually, "quality of implementation" issue!)&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;10.0 
                        / 0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;Infinity&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;-5.0 
                        / 0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;-Infinity&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;0.0 
                        / 0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN 
                        (division by zero does NOT always give Infinity!)&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;0.0 
                        ==-0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;.TRUE.&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;Infinity 
                        * 0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN 
                        (can you just imagine the debate over this one!)&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;Infinity 
                        - Infinity&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;Infinity 
                        / Infinity&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;1.0 
                        / Infinity&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;0.0&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;-1.0 
                        / Infinity&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;-0.0&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;NaN 
                        * 3.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;NaN 
                        == NaN&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;.FALSE. 
                        (optimizing compilers love this one...)&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;NaN 
                        /= NaN&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;.TRUE. 
                        (... and this one, too!)&lt;/TD&gt;
                    &lt;/TR&gt;
                  &lt;/TBODY&gt;&lt;/TABLE&gt;
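Most rows of this table can be checked in Python, with one caveat: plain Python raises ZeroDivisionError for 10.0 / 0.0 rather than returning Infinity, so the division-by-zero rows are left out of this sketch and the special values come from math.inf and math.nan instead:

```python
import math

inf = math.inf

assert 0.0 == -0.0                             # signed zeros compare equal
assert math.isnan(inf * 0.0)                   # Infinity * 0.0 is NaN
assert math.isnan(inf - inf)                   # Infinity - Infinity is NaN
assert math.isnan(inf / inf)                   # Infinity / Infinity is NaN
assert 1.0 / inf == 0.0
assert math.copysign(1.0, -1.0 / inf) == -1.0  # -1.0 / Infinity is -0.0
assert math.isnan(math.nan * 3.0)              # NaN propagates through arithmetic
```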
                  &lt;P&gt; This standard also defined SQRT, but NOT any of the intrinsic 
                    functions like SIN, COS, TAN, SUM, PRODUCT, etc. The result 
                    of all of this was that many programs could just keep running, 
                    producing +-Infinity and NaN as they went, and not particularly 
                    worry about dividing by zero or the aftermath (pun intended!). 
                    And these values would tend to propagate themselves whenever 
                    they are used. You CAN "get rid of" an Infinity 
                    if all you do is to use it as a divisor (producing zero), 
                    but NaN is really hard to "get rid of". In fact, 
                    about the only way to constructively eliminate a NaN is to 
                    do something like: &lt;/P&gt;
                  &lt;PRE&gt;
IF(ISNAN(X)) THEN
! Replace X with something else
! or use better/other algorithm, etc.
ENDIF&lt;/PRE&gt;
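The same idea in Python looks much like the Fortran above; clean() here is just an illustrative name for the pattern:

```python
import math

def clean(x, fallback=0.0):
    # Replace a NaN with a fallback value. Nothing short of an explicit
    # test like this makes a NaN go away once it has appeared.
    return fallback if math.isnan(x) else x

assert clean(math.nan) == 0.0
assert clean(2.5) == 2.5
```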
                  &lt;P&gt;Ah, but much was left undefined. For example, what result 
                    would you like to produce for SIN(X) where X is Infinity? 
                    As you know, SIN normally has a range between -1. and 1., 
                    so should we return Infinity? Would NaN be better? How about 
                    a more traditional "DOMAIN error" for the intrinsic 
                    function? And if intrinsic functions are not enough trouble, 
                    how about comparisons? For example while (Infinity .GT. 17.0) 
                    is .TRUE. (defined that way), it might not be so obvious that 
                    (NaN .EQ. NaN) is .FALSE. or that (Infinity .GT. NaN) is .FALSE. 
                    There is a whole new algebra, but only defined for primitive 
                    arithmetic and comparison operations (this IS a hardware standard, 
                    after all!). Don't even think about COMPLEX numbers such as: 
                    (-Infinity, NaN)... &lt;/P&gt;
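The comparison rules can be checked in a couple of lines (a Python sketch of the same IEEE behavior):

```python
import math

inf, nan = math.inf, math.nan

assert inf > 17.0                  # Infinity outranks every finite number
assert (nan == nan) is False       # NaN never equals anything, even itself
assert (nan != nan) is True
assert (inf > nan) is False        # every ordered comparison with NaN is False
assert (nan > 0.0) is False
```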
                  &lt;P&gt; In order to represent Infinity and NaN, the IEEE standard 
                    chose to make all reals having the largest exponent value 
                    (all 1's) "reserved". If the exponent is all 1's 
                    and the fraction is zero, we have an Infinity. The sign bit 
                    is relevant, so there is one value for +Infinity (7F800000 
                    in hex) and one value for -Infinity (FF800000). If the exponent 
                    is all 1's and the fraction is ANY non-zero value, then this 
                    is a NaN. Notice that there are many different values for 
                    NaN. There are even two different kinds of NaN, Quiet and 
                    Signaling, but this distinction is so esoteric for Fortran 
                    that if you understand the difference and make use of it in 
                    your Fortran programs, then you can send your resume to us... 
                  &lt;/P&gt;
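You can confirm the reserved bit patterns with the same struct trick used earlier (a sketch; the exact NaN fraction bits vary by platform, which is rather the point):

```python
import struct

def single_bits(x):
    # The 32-bit pattern of x rounded to IEEE single precision.
    return struct.unpack('>I', struct.pack('>f', x))[0]

assert single_bits(float('inf'))  == 0x7F800000  # all-ones exponent, zero fraction
assert single_bits(float('-inf')) == 0xFF800000  # same, with the sign bit set

nan_bits = single_bits(float('nan'))
assert (nan_bits // 2**23) % 256 == 255          # exponent field all ones
assert nan_bits % 2**23 != 0                     # non-zero fraction: a NaN
```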
&lt;P&gt;Continued in next post&lt;/P&gt;</description>
    <pubDate>Thu, 08 Dec 2005 02:14:27 GMT</pubDate>
    <dc:creator>Steven_L_Intel1</dc:creator>
    <dc:date>2005-12-08T02:14:27Z</dc:date>
    <item>
      <title>Visual Fortran Newsletter Articles</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844697#M62666</link>
      <description>&lt;P&gt;In this thread I will post copies of selected articles from the (DEC/Compaq) Visual Fortran Newsletter, including the Doctor Fortran series. I'm not going to try to go back and edit them to reflect Intel Visual Fortran, though I may add comments.&lt;BR /&gt;
	&lt;BR /&gt;
	Enjoy.&lt;/P&gt;

&lt;P&gt;Links to specific posts:&lt;/P&gt;

&lt;UL&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548430"&gt;Everything you've always wanted to know about VB Arrays of Strings* (*but were afraid to ask)&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548431"&gt;Ask Dr. Fortran&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548432"&gt;Hey! Who are you calling "obsolescent"?&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548433"&gt;Dr. Fortran says "Better SAVE than sorry!"&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548434"&gt;Dr. Fortran and "The Dog That Did Not Bark"&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548435"&gt;Doctor Fortran in "To .EQV. or to .NEQV., that is the question", or "It's only LOGICAL"&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548436"&gt;Don't Touch Me There - What error 157 (Access Violation) is trying to tell you&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548437"&gt;Doctor Fortran and the Virtues of Omission&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548438"&gt;The Perils of Real Numbers (Part 1)&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548441"&gt;The Perils of Real Numbers (Part 2)&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548445"&gt;The Perils of Real Numbers (Part 3)&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548439"&gt;Win32 Corner - ShellExecute&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548440"&gt;Doctor Fortran Gets Explicit!&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548443"&gt;Win32 Corner - CreateProcess&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548444"&gt;Doctor Fortran - Something Old, Something New: Taking a new look at FORMAT&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548446"&gt;Calling Visual Fortran from Java JNI&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548447"&gt;Doctor Fortran - Don't Blow Your Stack!&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548449"&gt;Passing Arrays in Fortran 90&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;&lt;A href="http://www.intel.com/software/drfortran"&gt;Newer Doctor Fortran posts&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 07 Dec 2005 23:42:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844697#M62666</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-07T23:42:58Z</dc:date>
    </item>
    <item>
      <title>"Everything you've always wanted to know about VB Arrays of Strings* (*but were afraid to ask)"</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844698#M62667</link>
      <description>&lt;P&gt;May 1998&lt;/P&gt;
&lt;P&gt;"Everything you've always wanted to know about VB Arrays of Strings*&lt;BR /&gt;

    (*but were afraid to ask)"&lt;BR /&gt;

Lorri Menard&lt;/P&gt;

&lt;P&gt;As promised in the last newsletter, here is an article on "How to pass
arrays of strings from Visual Basic to Visual Fortran".  Actually, it
should be called "How DVF can receive arrays of strings from VB", because
Visual Basic doesn't need to do anything special to pass the arrays to
Fortran.&lt;/P&gt;

&lt;P&gt;The structure that VB uses to pass arrays of strings is called a "Safe
Array". These are often used in COM interfaces, and contain information
about the dimensions and bounds of the arrays within them.&lt;/P&gt;

&lt;P&gt;Appended is an example Fortran subroutine that receives a one-dimensional
SafeArray of strings from Visual Basic and writes the contents of each
string out to a data file.   It then modifies the strings, within the
SafeArray structure, and passes them back to VB.  I've noted the areas of
interest with the keystring "!**", and included a long and involved
explanation of why you need to do it that way.  (Of course, I reserve the
right to claim "Because I said so!")&lt;/P&gt;
 
&lt;P&gt;The call from the Basic routine is as simple as this:&lt;/P&gt;

[plain]
Dim MyArray(2) as String
MyArray(0) = "First element"
MyArray(1) = "Second element"
MyArray(2) = "Third element"
Call ForCall(MyArray)
[/plain]

&lt;P&gt;Now, let's get into the Fortran program.&lt;/P&gt;

[fortran]! ARRAYS.F90
! This subroutine takes as input an array of strings from Visual Basic,
!  and writes each string out to a datafile.
! It also writes various pieces of information about the array to that
!  file, for illustrative purposes.
! 
subroutine ForCall (VBArray)
	!dec$ attributes alias : "ForCall" :: ForCall
	!dec$ attributes dllexport :: ForCall
	!dec$ attributes stdcall :: ForCall
!** Declare the array of strings (SafeArray) as being passed by REFERENCE.
!**  This must be explicit.
	!dec$ attributes reference :: VBArray
!** The following module declares the interfaces to SafeArrayxxx
	use dfcom

	implicit none

!** Declare the SafeArray as a pointer.  Use a generic 
!**   integer as something to point to, because the POINTER statement
!**   requires it.  
!** When this is declared as a pointer it will automatically expand
!**  to fit the size of a pointer for the particular platform.  Today
!**  that is 32 bits - in the future, that may expand.

	pointer (VBArray,SADummy)  !Pointer to a SafeArray structure
	integer SAdummy

!** What is returned by SafeArrayGetElem is a BSTR.  The structure of
!**  a BSTR is such that the length of the BSTR is returned in the word
!**  preceding the pointer, and the string itself is pointed to by
!**  the pointer.  When using COM, BSTRs are coded in Unicode.  Through
!**  experimentation with Visual Basic V5.0 I've found that it passes
!**  BSTRs coded in 8-bit ASCII.  
!** Please note: This may not be true with future releases of VB!
!**  The good news is that it allows us to take some shortcuts for now.
!** 
!**  Set up the appropriate structures.  Declare a character string
!**  that is "long enough".  It doesn't actually take up any space
!**  in your program; it is used as a template to describe the memory
!**  pointed to by the pointer StringPtr

	character*2000 mystring
	pointer (StringPtr, mystring)

	integer i, result, lbound, ubound, length

	! Create the data file
	open (2, file="test.out", status="unknown")

	write (2, *) "Details of the array passed by VB"
	! Get the lower array bound
	result = SafeArrayGetLBound(VBArray, 1, lbound)
	write (2, *) "GetLBound gives ", lbound
	! Get the upper array bound
	result = SafeArrayGetUBound(VBArray, 1, ubound )
	write (2, *) "GetUBound gives ", ubound

!** In this next loop, get each element of the array.  This returns a
!**  pointer to a copy of the string, which can then be referenced through
!**  mystring.  The length of the string is retrieved by the routine
!**  SysStringByteLen.
!** This copy must be freed when we're done with it.

	write (2, *) "Strings from the array:"
	do i = lbound, ubound
		result = SafeArrayGetElement(VBArray, i, LOC(StringPtr))
		length = SysStringByteLen(StringPtr)
		write (2, *) mystring(1:length)
		call SysFreeString(StringPtr)
	end do

	!Done with the data file.
	close (2)

!** This next loop writes a string back into each element of the array.
!**  Through experimentation I've discovered that you MUST write back as
!**  many characters as were there before: no more, no less.  This loop
!**  gets the length of the element, and writes back that many characters.
!** Once again, the SafeArrayGetElement makes a copy, which must be
!**  freed.  
!** SafeArrayPutElement also makes a copy, which is then passed back
!**  to Visual Basic.  Unfortunately, the memory occupied by the original
!**  strings passed in is still allocated, and no longer pointed to.

	!Let's try writing back into VB's array
	do i = lbound, ubound
		result = SafeArrayGetElement(VBArray, i, LOC(StringPtr))
		length = SysStringByteLen(StringPtr)
		mystring(1:length) = "Element#" // char(i+1+48)
		mystring(length+1:length+1) = char(0)
		result = SafeArrayPutElement(VBArray, i, LOC(mystring))
		call SysFreeString(StringPtr)
	end do

	return
	end
[/fortran]</description>
      <pubDate>Wed, 07 Dec 2005 23:50:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844698#M62667</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-07T23:50:00Z</dc:date>
    </item>
    <item>
      <title>Ask Dr. Fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844699#M62668</link>
      <description>&lt;P&gt;May 1998&lt;/P&gt;

&lt;P&gt;"Ask Dr. Fortran"&lt;BR /&gt;

Steve Lionel&lt;/P&gt;

&lt;P&gt;Dear Dr. Fortran,&lt;/P&gt;
     
&lt;P&gt;I know this program who seems to be OK, but he is a little different from
all the other programs. (Just between you and me, he is a legacy program. 
Don't let that get out.  It would not be politically correct.)&lt;/P&gt;
     
&lt;P&gt;He started out life written for the IBM 1130 Disk Monitor System with 8k of
core storage.  He was written in 1130 FORTRAN.  The original documentation
gives direction as to which switches on the computer must be flipped to 
invoke certain options.  But he is still alive and works well.  We still add
things to him.  He lives on PCs now.&lt;/P&gt;
     
&lt;P&gt;We really don't view him as belonging to us.  His original programmers have
retired and some have died.  So in that respect he is doing better than his
creators.  We are like park rangers taking care of some national treasure
that is to be passed on to our successors.&lt;/P&gt;
     
&lt;P&gt;But let's give this a go.  I need some help understanding what really goes on
in this guy's head.  There are many cases of the following coding:&lt;/P&gt;
&lt;PRE&gt;     
        subroutine xyz(n,array)
        integer n
        real array(1)      &amp;lt;-- Note the size is only 1
           . . .           &amp;lt;-- code in here loops from 1 to n.
        return
        end
&lt;/PRE&gt;    
&lt;P&gt;My questions are:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Why does this work?  It seems an out of bounds exception should be 
   generated for array since its size is only 1.&lt;/LI&gt;
     
&lt;LI&gt;In the main program the arrays are explicit shape.  What type is array 
     in subroutine xyz?&lt;/LI&gt;
     
&lt;LI&gt;Is array(1) standard FORTRAN, or is this something that most compilers 
     just allow?&lt;/LI&gt;&lt;/OL&gt;
     
&lt;P&gt;Sincerely,&lt;/P&gt;
     
&lt;P&gt;Robert Magliola&lt;BR /&gt;

De Leuw, Cather and Co.&lt;/P&gt;

&lt;P&gt;Dear Mr. Magliola,&lt;/P&gt;

&lt;P&gt;When this charming program was written, in the days of keypunches and
storage drums, FORTRAN IV (FORTRAN-66) was the current standard.  While
FORTRAN IV did have the "adjustable array" feature, (which could have been
used in the above example by using "array(n)" instead of "array(1)"), it did
not have the "assumed-size array" feature (where the rightmost upper bound
is specified as "*") that was to be introduced in FORTRAN 77.  Therefore,
programmers who wanted to write subroutines which would accept an array of
unknown total size would use a last dimension of 1.  This worked because the
last upper bound is not needed to calculate the position of an element in
a Fortran array, and compilers of the time didn't have array bounds checking
(or if they did it could be disabled).&lt;/P&gt;
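That last claim (the final upper bound never enters the element-address computation) can be sketched in a few lines of Python. The function below is purely illustrative, not how any particular compiler is written:

```python
def element_offset(subscripts, lower_bounds, extents):
    # Column-major (Fortran) offset: each subscript contributes
    # (s - lower) * stride, where the stride is the product of the
    # extents of all earlier dimensions. Note the LAST extent is never
    # multiplied in, which is why a subroutine can get away with
    # declaring it as 1 (or *).
    offset, stride = 0, 1
    for s, low, ext in zip(subscripts, lower_bounds, extents):
        offset += (s - low) * stride
        stride *= ext
    return offset

# A 3 x 4 array: element (2, 3) sits at offset (2-1) + (3-1)*3 = 7.
assert element_offset((2, 3), (1, 1), (3, 4)) == 7
# Lying about the last extent (4 becomes 1) changes nothing:
assert element_offset((2, 3), (1, 1), (3, 1)) == 7
```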

&lt;P&gt;Now fast-forward to 1978 when the FORTRAN 77 standard was adopted.  It included
a new "assumed-size" array feature (which had already shown up as an extension
in many vendors' compilers).  So now there was a standard-conforming way to
say "I don't know what the upper bound is", yet there were still thousands of
existing programs that used the old (1) convention and more compilers
supported bounds-checking, even at compile-time (VAX FORTRAN did this, for
example.)  These old programs would suddenly start getting errors, which
was not desirable - the Fortran tradition is to provide as much upward
compatibility as possible.  What to do?&lt;/P&gt;

&lt;P&gt;The solution was to have compilers treat a last upper bound of 1 as a special
case that was equivalent to *, disabling bounds checking (which answers your
first question).  The array in the above example has a single dimension with
lower bound 1 and an implicit upper bound of the total number of elements in
the array that was passed, though most compilers don't pass that information
and just treat the upper bound as infinite (questions two and three.)  It is
valid to have a multi-dimension assumed-size array, but only the rightmost
(last) dimension can have an upper bound of * (or 1 treated as *).  If a
dimension other than the last has an upper bound of 1, then 1 is exactly what
you get - the special treatment applies only to the last dimension.&lt;/P&gt;
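
&lt;P&gt;For illustration, here is how the same routine would be written using the
FORTRAN 77 assumed-size form (a sketch, reusing the names from the example
above):&lt;/P&gt;
&lt;PRE&gt;
        subroutine xyz(n,array)
        integer n
        real array(*)       &amp;lt;-- assumed-size: upper bound unknown
           . . .            &amp;lt;-- code in here still loops from 1 to n
        return
        end
&lt;/PRE&gt;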

&lt;P&gt;I hope you have enjoyed this trip back into the history of the Fortran
language.&lt;/P&gt;</description>
      <pubDate>Wed, 07 Dec 2005 23:55:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844699#M62668</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-07T23:55:27Z</dc:date>
    </item>
    <item>
      <title>Dr. Fortran - Obsolescent Features</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844700#M62669</link>
      <description>&lt;P&gt;October 1998&lt;/P&gt;
&lt;P&gt;Ask Dr. Fortran&lt;BR /&gt;

Hey! Who are you calling "obsolescent"?&lt;BR /&gt;

Steve Lionel, DVF Development Team&lt;/P&gt;

&lt;P&gt;Dr. Fortran didn't receive any appropriate questions for his column this
time, so he's going to take on a topic that is sure to raise a ruckus each
time it is brought up in the comp.lang.fortran newsgroup: Obsolescent and
Deleted Features.&lt;/P&gt;

&lt;P&gt;Fortran (or FORTRAN) has had a long history of general upward compatibility
- Fortran 77 included almost all of Fortran 66, and Fortran 90 included all
of Fortran 77.  But Fortran 90 formally introduced the concept of "language
evolution" with the goal of removing from the language certain features that
had more modern counterparts in the new language. &lt;/P&gt;

&lt;P&gt;The Fortran 90 standard added two lists of features, "Deleted" and
"Obsolescent".  The "Deleted" list, features no longer in the language, was
empty in Fortran 90.  The "Obsolescent" list contained nine features of 
Fortran 77 which, to quote the standard, "are redundant and for which better
methods are available in Fortran 77."  Furthermore, the F90 standard said:&lt;/P&gt;

&lt;BLOCKQUOTE&gt;&lt;P&gt;If the use of these features has become insignificant in Fortran programs,
  it is recommended that future Fortran standards committees consider
  deleting them from the next revision.&lt;/P&gt;

  &lt;P&gt;It is recommended that the next Fortran standards committee consider for
  deletion only those language features that appear in the list of 
  obsolescent features.&lt;/P&gt;

  &lt;P&gt;It is recommended that processors supporting the Fortran language continue
  to support these features as long as they continue to be widely used in
  Fortran programs.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;

&lt;P&gt;Proponents of "cleaning up" the language argued that it would make compiler
implementors' jobs easier.  The compiler vendors disagreed; most said that
they would not remove support for any features since they knew that users
continue to compile old programs.  Furthermore, deleting a feature from the 
language means that there is no official description of how that feature, if
still supported, interacts with other language features.  (The Fortran 77
standard included an appendix describing how Hollerith constants, a 
FORTRAN IV feature not included in Fortran 77, should work if a compiler 
chose to support them.) For the record, DIGITAL will not remove support for 
any "deleted" language features from its Fortran compilers.&lt;/P&gt;

&lt;P&gt;In Fortran 90, the list of "obsolescent" features was as follows:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Arithmetic IF
&lt;/LI&gt;&lt;LI&gt;Real and double precision DO control variables and DO loop control
       expressions
&lt;/LI&gt;&lt;LI&gt;Shared DO termination and termination on a statement other than an END
       DO or CONTINUE statement
&lt;/LI&gt;&lt;LI&gt;Branching to an END IF statement from outside its IF block
&lt;/LI&gt;&lt;LI&gt;Alternate return
&lt;/LI&gt;&lt;LI&gt;PAUSE statement
&lt;/LI&gt;&lt;LI&gt;ASSIGN statement and assigned GO TO statements
&lt;/LI&gt;&lt;LI&gt;Assigned FORMAT specifiers
&lt;/LI&gt;&lt;LI&gt;cH edit descriptor&lt;/LI&gt;&lt;/OL&gt;
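
&lt;P&gt;For a taste of why these features are considered redundant, consider the
arithmetic IF (item 1), which branches to one of three labels depending on the
sign of its expression.  A sketch, with invented labels, next to its modern
equivalent:&lt;/P&gt;
&lt;PRE&gt;      IF (X) 10, 20, 30
C     ...is equivalent to:
      IF (X .LT. 0) THEN
         GO TO 10
      ELSE IF (X .EQ. 0) THEN
         GO TO 20
      ELSE
         GO TO 30
      END IF&lt;/PRE&gt;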

&lt;P&gt;Descriptions of obsolescent features in the standard appeared in a small font
and compilers were to provide the ability to issue diagnostics for the use
of obsolescent features.&lt;/P&gt;

&lt;P&gt;Now we come to Fortran 95.  Keep in mind that the Fortran 90 standard did
not say that the next standard HAD to delete any of the
previously-designated "obsolescent" features, but that's exactly what the
standards committee did.  Six of the nine "obsolescent" features (numbers
2, 4, 6, 7, 8 and 9 above) were "deleted". Poof!  Gone!  And guess what -
that meant that a valid Fortran 77 program was no longer a valid Fortran 95
that meant that a valid Fortran 77 program was no longer a valid Fortran 95
program!  But never fear: DVF (and indeed most vendors' compilers) will
continue to support the deleted features (with optional diagnostics
informing you of the fact, of course.)&lt;/P&gt;

&lt;P&gt;The Fortran 95 list of obsolescent features includes the remaining items of
the above list from Fortran 90 (1, 3 and 5), as well as several new 
additions. Are you sitting down?  Here's the new list:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Arithmetic IF
&lt;/LI&gt;&lt;LI&gt;Shared DO termination and termination on a statement other than END
       DO or CONTINUE
&lt;/LI&gt;&lt;LI&gt;Alternate return
&lt;/LI&gt;&lt;LI&gt;Computed GO TO statement (use CASE)
&lt;/LI&gt;&lt;LI&gt;Statement functions (use CONTAINed procedures)
&lt;/LI&gt;&lt;LI&gt;DATA statements amongst executable statements (betcha didn't know
       they could go there!)
&lt;/LI&gt;&lt;LI&gt;Assumed length character functions (this means CHARACTER*(*)
       FUNCTIONs)
&lt;/LI&gt;&lt;LI&gt;Fixed form source (!!!!)
&lt;/LI&gt;&lt;LI&gt;CHARACTER* form of CHARACTER declaration (use CHARACTER([LEN=]))
&lt;/LI&gt;&lt;/OL&gt;
&lt;P&gt;Needless to say, the inclusion of fixed-form source on this list has raised
a LOT of eyebrows...  Assumed-length CHARACTER functions (and the CHARACTER*
form of declaration) are deemed to be an "irregularity" in the language,
which they are, and there are alternatives available, but Dr. Fortran
doesn't see these disappearing from users' code anytime soon.&lt;/P&gt;

&lt;P&gt;So does this mean that some of these features will be deleted in the next
standard, currently called "Fortran 2000"? [Fortran 2003 - ed.] At present, the answer is "no".
The standards committee has agreed to NOT move any features from the
"obsolescent" list to the "deleted" list for F2K, and furthermore, is not
proposing any additions to the "obsolescent" list.  So it would appear that,
for now, anyway, the "concept of language evolution" excludes extinction,
and that should make Fortran programmers around the world breathe easier.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 00:03:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844700#M62669</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T00:03:59Z</dc:date>
    </item>
    <item>
      <title>Re: Visual Fortran Newsletter Articles</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844701#M62670</link>
<description>
&lt;P&gt;July(?) 1999&lt;/P&gt;
&lt;P&gt;Dr. Fortran says "Better SAVE than sorry!"&lt;BR /&gt; Steve Lionel, Compaq Fortran Engineering&lt;/P&gt;
&lt;P&gt;In this issue, Dr. Fortran takes on another less-understood feature of the Fortran language, the SAVE attribute.&lt;/P&gt;
&lt;P&gt;Back in the "good old days" of Fortran programming, when lowercase letters hadn't been invented yet and we strung our core memory wires by hand, programmers knew that local variables lived in fixed memory locations and, of course, took advantage of that, writing code such as this:&lt;/P&gt;
&lt;PRE&gt;      SUBROUTINE SUB
      INTEGER I
      I = I + 1
      WRITE (6,*) 'New I=',I
      END
&lt;/PRE&gt;
&lt;P&gt;The idea was that the value of the local variable I was preserved between calls to routine SUB, so that subsequent calls would get successive values of I.  (Many of these same programmers assumed that variables were zero-initialized as well.)  However, the Fortran language didn't make such promises and, with the advent of improved optimization and "split lifetime analysis" which could make variables live in registers or on the stack, programs which made such assumptions could break.&lt;/P&gt;
&lt;P&gt;To accommodate the useful notion of a local variable whose definition status is preserved across routine calls, Fortran 77 added the SAVE statement.  If a local variable (not a dummy argument) was named in a SAVE statement, its value at the point of the RETURN or END statement was preserved to the next call to that subroutine or function.  Named COMMON blocks could also be SAVEd, but didn't need to be if there was always a routine active in the call tree which used that COMMON.  Blank COMMON was implicitly SAVEd.  An interesting tidbit is that in Fortran 77, a DATA-initialized variable's value was preserved without needing SAVE as long as you didn't redefine the variable.  Fortran 90 removed that last clause for local variables, so that an "initially defined" local variable is implicitly SAVEd, but the catch about redefinition still applies to variables in named COMMON blocks.&lt;/P&gt;
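&lt;P&gt;A standard-conforming version of the earlier example, then, names I in a
SAVE statement and gives it an explicit initial value (a minimal sketch):&lt;/P&gt;
&lt;PRE&gt;      SUBROUTINE SUB
      INTEGER I
      SAVE I
      DATA I /0/
      I = I + 1
      WRITE (6,*) 'New I=',I
      END
&lt;/PRE&gt;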
&lt;P&gt;One common misconception is that SAVE implies static (fixed address) allocation for a variable.  This is not so - in fact, if the compiler can determine that a SAVEd variable is always defined before use, then it could decide to make that variable live in a non-static (register or stack) location.  The Fortran standard has no mechanism for saying "static and I mean it" - even the Compaq Fortran STATIC extension doesn't do this.  Right now, the best way to ensure that a variable is allocated statically is to put it in COMMON and give it the VOLATILE attribute (VOLATILE is an extension [but is standard in F2003 - ed.]).&lt;/P&gt;
&lt;P&gt;Fortran 90 added a new twist to this - ALLOCATABLE arrays.  Fortran 90 implied, and Fortran 95 makes clear, that local ALLOCATABLE variables get automatically deallocated and become undefined when the routine in which they are declared is returned from.  This has been a big shock to some programmers who figured that the values would stay around.  If you want the array to remain there, use SAVE.&lt;/P&gt;
&lt;P&gt;Given that many programs assumed SAVE semantics for variables, most vendors, including DIGITAL, had their Fortran 77 compilers give implicit SAVE semantics to variables which were used before being defined. (Note that this doesn't apply to ALLOCATABLE variables.) [Intel Fortran does NOT give SAVE semantics by default.] So why use SAVE?  First, it is always a good idea to tell the compiler what you want, rather than making assumptions based on a current implementation.  Compilers keep getting smarter and what "works" today might not work next year. Proper use of SAVE can also aid with error reporting - some compilers will suppress "use before defined" warnings for variables with an explicit SAVE attribute.  It's also good to let the human who reads your code know that you are assuming the variable's value is preserved.  That's why Dr. Fortran says, "Better SAVE than sorry!"&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 00:10:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844701#M62670</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T00:10:12Z</dc:date>
    </item>
    <item>
      <title>Dr. Fortran and "The Dog That Did Not Bark"</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844702#M62671</link>
      <description>&lt;P&gt;October 1999&lt;/P&gt;
                  &lt;H3&gt;&lt;I&gt;Dr. Fortran and "The Dog That Did Not Bark"&lt;/I&gt;&lt;/H3&gt;
                  &lt;H4&gt;By Steve Lionel&lt;/H4&gt;
In past issues of the newsletter, Dr. Fortran has discussed an assortment of things, sometimes obscure, that the Fortran standard says. In this issue, he's going to take a page from Sherlock Holmes and talk about things that the standard doesn't say, and how they can bite you as well. 

Let's start with a simple observation that the standard describes a "standard-conforming program". That is, it establishes the rules to which a program must conform in order to produce results as specified by the standard. If your program is not standard-conforming, then all bets are off - the processor (compiler and run-time environment) can do anything (a common example used in comp.lang.fortran is "Start World War III", though the Doctor is not aware of any implementations which would do this - he would consider this a "quality of implementation issue"). 

You've probably written many non-conforming programs without realizing it. Got INTEGER*4 in your programs? Non-standard. Use LOGICAL variables in arithmetic expressions (or use logical operators such as .AND. on integers)? Non-standard. What these do is implementation-dependent. If a compiler supports these and similar uses, it does so as extensions to the standard and is generally required to have the ability to detect the non-conformance at compile-time. If your program uses such extensions, it is non-portable and may execute differently on different platforms or with different compilers. 

However, there is another class of non-conformity that, in general, can't be detected at compile time and which can cause big headaches for programmers who make unwarranted assumptions. Let's start with one of the Doctor's favorites - order of evaluation of LOGICAL expressions. 

Many programmers write something like this:

IF ((I .NE. 0) .AND. (ARRAY(I) .NE. -1)) THEN

and expect that if I is zero, then the reference to ARRAY(I) won't happen. The program may work on one platform, but get array bounds errors when ported to another. However, the standard allows the operands of a logical operator to be evaluated in any order, and at any level of completeness, as long as the result is algebraically correct. For logical expressions, Fortran does NOT have strict left-to-right ordering nor does it have short-circuit evaluation. The standard-conforming way of writing this is:


IF (I .NE. 0) THEN
  IF (ARRAY(I) .NE. -1) THEN
    ...
  END IF
END IF

Here's another place where the standard's silence can trap the unwary. What do you see when you execute the following statement?

WRITE (*,'(F3.0)') 2.5

Many Fortran programmers expect "3.". But try this in Visual Fortran, as well as in most other PC and UNIX workstation Fortran implementations and you'll get "2."! Why? Well, the Fortran standard says that the value is to be "rounded", but doesn't define what that means! On systems which implement IEEE floating arithmetic, the IEEE default rounding rules are used and they specify that if the rounding digit is exactly half-way between two representable results, you round so that the low-order digit is even. If you're a VAX user, you'll get "3." because VAX rounding uses the "5-9 rounds up" rule, and an OpenVMS Alpha user can see it either way, depending on whether or not IEEE float was selected! The Doctor notes that the Fortran standards committee is working on a proposal for a future standard that would allow the programmer to specify the rounding method, but for now, the standard is silent and you get whatever the compiler writers think is right. 
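
To see the round-to-even rule in action on an IEEE system, try a pair of statements like these (the second result is the Doctor's expectation under the same IEEE default rounding):

WRITE (*,'(F3.0)') 2.5
WRITE (*,'(F3.0)') 3.5

The first prints " 2." (2.5 is exactly halfway between 2 and 3, and 2 is the even neighbor); the second prints " 4." for the same reason.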

Pop quiz time - in a CHARACTER(LEN=n) declaration, what is the lowest value of n that a compiler is required to support, according to the standard? Is it A) 1? B) 11? C) 255? D) 1000? The standard doesn't explicitly say, but one can make a good argument for one of these. Go to the end of the column to see which one and why. The Doctor's point is that there are many compiler limits which the standard does not specify (including things such as the number of nested parentheses in an expression, number of actual arguments supported, etc.). While most implementations have reasonable limits for such things, the Doctor has seen programs which exceed the limits of some implementations (for example, using hundreds of actual arguments) and become non-portable. Just because one compiler supports something, that doesn't mean that all will! 

There are many other things the standard doesn't say that programmers often take for granted. For example, the standard doesn't even say that 1+1=2, or how accurate the SIN intrinsic must be. An implementation which grossly violates reasonable expectations here would probably be a commercial failure, but it wouldn't be violating the standard! 

In summary, writing standard-conforming and portable programs is not just a matter of throwing the "standards checking switch". You also need to be aware of things the standard doesn't say and to make sure that your application doesn't depend on implementation-dependent features and behaviors. The more platforms you port your application to, the more likely it is that you'll uncover such assumptions in your code.

Answer to Dr. Fortran's pop quiz: B) 11. Why? Because INQUIRE(FORM=) is supposed to assign the value "UNFORMATTED" to the specified variable (for unformatted connections) and that's 11 characters long, the longest of the set that INQUIRE returns. No other language rule implies a longer minimum length.</description>
      <pubDate>Thu, 08 Dec 2005 00:18:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844702#M62671</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T00:18:00Z</dc:date>
    </item>
    <item>
      <title>It's only LOGICAL</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844703#M62672</link>
      <description>&lt;P&gt;April 2000&lt;/P&gt;
&lt;H2&gt;Doctor Fortran in "To .EQV. or to .NEQV., that is the question", or "It's only LOGICAL"&lt;/H2&gt;
&lt;H3&gt;By Steve Lionel&lt;BR /&gt;Visual Fortran Engineering&lt;/H3&gt;
&lt;P&gt;Most Fortran programmers are familiar with the LOGICAL data type, or at least they think they are.... An object of type LOGICAL has one of only two values, true or false. The language also defines two LOGICAL constant literals .TRUE. and .FALSE., which have the values true and false, respectively. It seems so simple, doesn't it? Yes... and no.&lt;/P&gt;
&lt;P&gt;The trouble begins when you start wondering about just what the binary representation of a LOGICAL value is. An object of type "default LOGICAL kind" has the same size as a "default INTEGER kind", which in Visual Fortran (and most current Fortran implementations) is 32 bits. Since true/false could be encoded in just one bit, what do the other 31 do? Which bit pattern(s) represent true, and which represent false? And what bit patterns do .TRUE. and .FALSE. have? On all of these questions, the Fortran standard is silent. Indeed, according to the standard, you shouldn't be able to tell! How is this?&lt;/P&gt;
&lt;P&gt;According to the standard, LOGICAL is its own distinct data type unrelated to and not interchangeable with INTEGER. There is a restricted set of operators available for the LOGICAL type which are not defined for any other type: .AND., .OR., .NOT., .EQV. and .NEQV.. Furthermore, there is no implicit conversion defined between LOGICAL and any other type.&lt;/P&gt;
&lt;P&gt;"But wait," you cry! "I use .AND. and .OR. on integers all the time!" And so you do - but doing so is non-standard, though it's an almost universal extension in today's compilers, generally implemented as a "bitwise" operation on each bit of the value, and generally harmless. What you really should be using instead is the intrinsics designed for this purpose: IAND, IOR and IEOR.&lt;/P&gt;
&lt;P&gt;Not so harmless is another common extension of allowing implicit conversion between LOGICAL and numeric types. This is where you can start getting into trouble due to implementation dependencies on the binary representation of LOGICAL values. For example, if you have:&lt;/P&gt;
[fortran]
INTEGER I,J,K
I = J .LT. K
[/fortran]
&lt;P&gt;just what is the value of I? The answer is "it depends", and the result may even vary within a single implementation. Compaq Fortran traditionally (since the 1970s, at least) considers LOGICAL values with the least significant bit (LSB) one to be true, and values with the LSB zero to be false. All the other bits are ignored when testing for true/false. Many other Fortran compilers adopt the C definition of zero being false and non-zero being true. (Visual Fortran offers the /fpscomp:logicals switch to select the C method, since PowerStation used it as well.) Either way, the result of the expression &lt;B&gt;J.LT.K&lt;/B&gt; can be any value which would test correctly as true/false. For example, the value 1 or 999 would both test as true using Compaq Fortran. Just in case you were wondering, Compaq Fortran uses a binary value of -1 for the literal .TRUE. and 0 for the literal .FALSE..&lt;/P&gt;
&lt;P&gt;The real trouble with making assumptions about the internal value of LOGICALs is when you try testing them for "equality" against another logical expression. The way many Fortran programmers would naturally do this is as follows:&lt;/P&gt;
[fortran] IF (LOGVAL1 .EQ. LOGVAL2) ...[/fortran]
&lt;P&gt;but the results of this can vary depending on the internal representation. The Fortran language
 defines two operators exclusively for use on logical values, .EQV. ("equivalent to") and .NEQV. ("not equivalent to"). So the above test would be properly written as:&lt;/P&gt;
[fortran]IF (LOGVAL1 .EQV. LOGVAL2) ...[/fortran]
&lt;P&gt;In the Doctor's experience, not too many Fortran programmers use .EQV. and .NEQV. where they should, and get into trouble when porting software to other environments. Get in the habit of using the correct operators on LOGICAL values, and you'll avoid being snared by implementation differences.&lt;/P&gt;
&lt;P&gt;However, there is one aspect of these operators you need to be aware of... A customer recently sent us a program that contained the following statement:&lt;/P&gt;
[fortran]DO WHILE (K .LE. 2 .AND. FOUND .EQV. .FALSE.)[/fortran]
&lt;P&gt;The complaint was that the compiler "generated bad code." What the programmer didn't realize was that the operators .EQV. and .NEQV. have &lt;B&gt;lower&lt;/B&gt; precedence than any of the other predefined logical operators. This meant that the statement was treated as if it had been:&lt;/P&gt;
[fortran]DO WHILE (((K .LE. 2) .AND. FOUND) .EQV. .FALSE.)[/fortran]
&lt;P&gt;what was wanted instead was:&lt;/P&gt;
[fortran]DO WHILE ((K .LE. 2) .AND. (FOUND .EQV. .FALSE.))[/fortran]
&lt;P&gt;The Doctor's prescription here is to always use parentheses! That way you'll be sure that the compiler interprets the expression the way you meant it to! (And you therefore don't have to learn the operator precedence table you can find in chapter 4 of the Compaq Fortran Language Reference Manual!)&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 00:25:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844703#M62672</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T00:25:00Z</dc:date>
    </item>
    <item>
      <title>Re: Visual Fortran Newsletter Articles</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844704#M62673</link>
<description>
&lt;P&gt;December 2000&lt;/P&gt;
&lt;H3&gt;Don't Touch Me There - What error 157                      (Access Violation) is trying to tell you&lt;/H3&gt;
&lt;H4&gt;Steve Lionel - Compaq Fortran Engineering&lt;/H4&gt;
&lt;P&gt;One of the more obscure error messages you can get at run                      time is Access Violation, which the Visual Fortran run-time                      library reports as error number 157. The documentation says                      that it is a "system error," meaning that it is                      detected by the operating system, but many users think they're                      being told that their system itself has a problem. In this                      article, I'll explain what an access violation is, what programming                      mistakes can cause it to occur, and how to resolve them.&lt;/P&gt;
&lt;P&gt;Windows (the 95/98/Me/NT/2000 varieties) is a 32-bit virtual                      memory operating system. The "32 bit" means that                      a memory address is 32 bits in size, potentially having over                      four billion possible addresses. "Virtual memory"                      means that not every memory address in use corresponds 1-to-1                      with a physical memory address - some may be "resident"                      in RAM and others "paged out" to a disk file. The                      other important aspect of virtual memory is that only those                      address ranges currently being used exist at all - others                      are not represented. It's like a telephone book, which has                      pages for only those names of people who live in the city.                      If a phone book had to include a space for every possible                      name, every city and town's phone book would fill rooms!&lt;/P&gt;
&lt;P&gt;When your program starts to run, Windows allocates (creates)                      just enough virtual memory to hold the static (fixed) code                      and data in the executable. As the program runs, it may ask                      to allocate additional memory, for example, through calls                      to ALLOCATE or malloc, either directly by your code or indirectly                      by the run-time library. Each allocation creates a new range                      of now-valid virtual addresses which didn't exist before.                      When the program ends, Windows automatically deallocates all                      the virtual memory the program used.&lt;/P&gt;
&lt;P&gt;Since not every possible 32-bit value represents a currently valid address, what happens if you try to access (read from or write to) an invalid address? Yes, that's right, you get an "Access Violation"! Probably the most common address involved in an access violation is zero. Because a zero address is typically reserved as meaning "not defined", Windows (and most operating systems) deliberately leaves unallocated the first group of addresses (page) starting at zero. This means that an attempt to access through an uninitialized address will result in an error. You can also get an access violation trying to access memory with a non-zero address when that memory's address range hasn't yet been allocated.&lt;/P&gt;
&lt;P&gt;Common causes of the "invalid address" type of                      access violation are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Mismatches in argument lists, so that data is treated                        as an address&lt;/LI&gt;
&lt;LI&gt;Out of bounds array references&lt;/LI&gt;
&lt;LI&gt;Mismatches in C vs. STDCALL calling mechanisms, causing                        the stack to become corrupted&lt;/LI&gt;
&lt;LI&gt;References to unallocated pointers&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Another type of access violation is where the address space                      exists but is protected. Usually, the address space in question                      is set up as "read only," so an attempt to write                      to it will result in an access violation. In Visual Fortran,                      the most common cause of this is passing a constant as an                      argument to a routine that then tries to modify the argument.                      Visual Fortran, as of version 6, asks the linker to put constants                      in a read-only address space. Windows NT/2000 honors this,                      so trying to modify a constant gets an error, but Windows                      95/98 (not sure of Me) does not, so the modification is allowed.                      This is why some programs that run on Windows 9x don't on                      NT/2000. (It is a violation of the standard to modify a constant                      argument.)&lt;/P&gt;
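&lt;P&gt;A minimal sketch of the "modifying a constant" mistake described above (the routine name is invented for illustration):&lt;/P&gt;
&lt;PRE&gt;      CALL BUMP (3)        ! the constant 3 is passed by reference
      END

      SUBROUTINE BUMP (N)
      INTEGER N
      N = N + 1            ! stores into the read-only constant
      END
&lt;/PRE&gt;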
&lt;P&gt;If you are running your application in the debugger, the                      debugger will stop at the point of the access violation. You                      may need to use the Context menu in the debugger to look at                      the statements of a caller of the code where the error occurred,                      but this can usually give you a good idea of what might be                      wrong. Compare argument lists carefully and look for the mistake                      of trying to modify a constant. Rebuild with bounds and argument                      checking enabled, if it's not already on (it is by default                      in Debug configurations created with V6 and later).&lt;/P&gt;
&lt;P&gt;So now you know that when you see "Access Violation",                      Windows is trying to tell you "Don't Touch Me There".&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Note from Steve&lt;/EM&gt; - As of December 2005, Intel Fortran does not put constants in read-only image sections.  That will be enabled in an update due in January 2006.  Current versions of the compiler do support the&lt;/P&gt;
&lt;PRE&gt;/assume:noprotect_constants&lt;/PRE&gt;
&lt;P&gt;switch which tells the compiler to pass constants in a stack temporary so that the called procedure can safely store to it, with the changes being discarded on return.&lt;/P&gt;
</description>
      <pubDate>Thu, 08 Dec 2005 00:31:06 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844704#M62673</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T00:31:06Z</dc:date>
    </item>
    <item>
      <title>Doctor Fortran and the Virtues of Omission</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844705#M62674</link>
      <description>&lt;P&gt;December 2000&lt;/P&gt;
&lt;H3&gt;Doctor Fortran and the Virtues of Omission&lt;/H3&gt;
&lt;H4&gt;Steve Lionel&lt;BR /&gt;Compaq Fortran Engineering&lt;/H4&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;As I was walking up the stair &lt;BR /&gt; I met a man who wasn't there. &lt;BR /&gt; He wasn't there again today. &lt;BR /&gt; I wish, I wish he'd stay away&lt;I&gt;.&lt;BR /&gt; Hughes Mearns (1875-1965)&lt;/I&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Up through Fortran 77, there was no Fortran standard-conforming way of calling a routine with a different number of arguments than it was declared as having. This didn't stop people from omitting arguments, but whether or not it worked was highly implementation- and argument-dependent. For example, one could often get away with omitting numeric scalar arguments, but not CHARACTER arguments or arguments used in adjustable array bounds expressions, as code in the called routine's "prologue" would try to reference the missing data, often resulting in an access violation (see &lt;A href="http://software.intel.com/en-us/forums/topic/275071#comment-1548436"&gt;&lt;I&gt;Don't Touch Me There&lt;/I&gt;&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;Fortran 90 introduced the concept of optional arguments and a standard-conforming way of omitting said optional arguments. Many users eagerly seized upon this and started using the new feature, but soon got tripped up and confused because they didn't follow all of the rules the standard lays out. The Doctor is here to help.&lt;/P&gt;
&lt;P&gt;First things first. To be able to omit an argument when calling a routine, the dummy argument in the called routine must be given the OPTIONAL attribute. For example:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;SUBROUTINE WHICH (A,B)&lt;BR /&gt; INTEGER, INTENT(OUT) :: A&lt;BR /&gt; INTEGER, INTENT(IN), &lt;B&gt;OPTIONAL&lt;/B&gt; :: B&lt;BR /&gt; ...&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;If an argument has the OPTIONAL attribute, you can test for its presence with the PRESENT intrinsic. The standard prohibits you from accessing an omitted argument, so use PRESENT to test to see if the argument is present before touching it. That part is simple.&lt;/P&gt;
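&lt;P&gt;For instance, here is a sketch of how the WHICH routine above might test B (the defaulting logic is illustrative, not part of the original example):&lt;/P&gt;
&lt;PRE&gt;
SUBROUTINE WHICH (A, B)
  INTEGER, INTENT(OUT) :: A
  INTEGER, INTENT(IN), OPTIONAL :: B
  IF (PRESENT(B)) THEN
    A = B        ! B was supplied -- safe to reference it
  ELSE
    A = -1       ! B was omitted -- supply a default instead
  END IF
END SUBROUTINE WHICH
&lt;/PRE&gt;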
&lt;P&gt;The part that people tend to miss, though, is that the use of OPTIONAL arguments means that an explicit interface for the routine is &lt;B&gt;required&lt;/B&gt; to be visible to the caller. Generally, this means an INTERFACE block (which must match the actual routine's declaration), but this rule is also satisfied if you are calling a CONTAINed procedure or a module procedure. If you don't have an explicit interface, the compiler doesn't know that it has to pass an "I'm not here" value (usually an address of zero) for the argument being omitted, and you could get an access violation or wrong results.&lt;/P&gt;
&lt;P&gt;An interesting aspect of OPTIONAL arguments is that it's ok to pass an omitted argument to another routine (which declares the argument as OPTIONAL) without first checking to see if it is PRESENT. The "omitted-ness" is passed along and can be tested by the other routine. What's even more interesting is that the standard allows you to pull this trick on intrinsics such as MAX, PRODUCT, etc.!&lt;/P&gt;
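&lt;P&gt;A quick sketch of this pass-through (routine names invented for illustration; INNER is contained, so its interface is explicit):&lt;/P&gt;
&lt;PRE&gt;
SUBROUTINE OUTER (N)
  INTEGER, INTENT(IN), OPTIONAL :: N
  CALL INNER (N)            ! legal even when N was omitted --
                            ! the "omitted-ness" passes through
CONTAINS
  SUBROUTINE INNER (M)
    INTEGER, INTENT(IN), OPTIONAL :: M
    IF (PRESENT(M)) PRINT *, 'Got ', M
  END SUBROUTINE INNER
END SUBROUTINE OUTER
&lt;/PRE&gt;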
&lt;P&gt;There are some additional aspects of optional arguments, such as the use of keyword names in argument lists, that are worth learning about. For more information, see the sections "Optional Arguments" and "Determining When Procedures Require Explicit Interfaces" in the Language Reference Manual. The Doctor highly recommends these for your reading pleasure. There will be a quiz next week (just kidding!).&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 00:42:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844705#M62674</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T00:42:00Z</dc:date>
    </item>
    <item>
      <title>The Perils of Real Numbers (Part 1)</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844706#M62675</link>
      <description>&lt;P&gt;April 2001&lt;/P&gt;
&lt;H3&gt;The Perils of Real Numbers (Part 1)&lt;/H3&gt;
                  &lt;H4&gt;Dave Eklund&lt;BR /&gt;

                    Compaq Fortran Engineering&lt;/H4&gt;
                  &lt;P&gt;One of Fortran's greatest strengths is its ability to manipulate 
                    real numbers. It is astonishing, however, that many Fortran 
                    programmers lack even a rudimentary understanding of them. 
                    In this series, perhaps we can acquire a better understanding 
                    and, at the very least, see how some of the "experts" 
                    deal with problems.&lt;/P&gt;
                  &lt;P&gt; Let's begin by asking the simple question, "Which real 
                    numbers can be represented EXACTLY?" If I gave you a 
                    number, how would you find out if the number actually had 
                    a precise representation on any given machine?! In what follows 
                    I am going to ALWAYS use a decimal point (.) when I am discussing 
                    real (floating point) numbers, and I will NEVER use a decimal 
                    point when discussing integers. &lt;/P&gt;
                  &lt;P&gt;So the following would be integers: 
                  &lt;/P&gt;&lt;BLOCKQUOTE&gt;17&lt;BR /&gt;

                    150&lt;BR /&gt;

                    -12&lt;BR /&gt;

                    0&lt;BR /&gt;

                    1000000000000000000&lt;/BLOCKQUOTE&gt;
                  &lt;P&gt; and the following would be reals: 
                  &lt;/P&gt;&lt;BLOCKQUOTE&gt;1.0&lt;BR /&gt;

                    -12.5&lt;BR /&gt;

                    .1234&lt;BR /&gt;

                    0.567&lt;BR /&gt;

                    -7.00&lt;BR /&gt;

                    3.14159265&lt;BR /&gt;

                    0.30517578125&lt;BR /&gt;

                    1000000000000000000. &lt;/BLOCKQUOTE&gt;
                  &lt;P&gt;Which of the above do you believe are EXACTLY representable 
                    as integers or as reals? Why? 
                  &lt;/P&gt;&lt;P&gt;Think POWERS OF TWO. If we start with the positive whole 
                    numbers, what we find is that both integers and real numbers 
                    are internally represented as sums of powers of 2. Now integers 
                    are easier to look at, and real numbers do have a pesky exponent 
                    field that needs to be considered, but an integer or real 
                    like "9" is the sum of 8 and 1, both of which are 
                    powers of 2 (2**3 and 2**0 respectively). The main exceptional 
                    value is zero. If you view zero as a power of two, perhaps 
                    it's time to increase your medication... 
                  &lt;/P&gt;&lt;P&gt;Now negative numbers are somewhat different. For integers, 
                    we are talking two's complement arithmetic (normally), but 
                    for real numbers we just turn on the sign bit in the real 
                    number, which the hardware designers so nicely provided. That's 
                    all well and good for whole numbers, but how about decimal 
                    fractions like .5 and .25? Well, continue to think POWERS 
                    OF TWO. Only now it's the negative powers of two. So, for 
                    example, .5 is 2.**(-1) and .25 is 2.**(-2) and so on. In 
                    point of fact .5 and .25 look identical as far as the "fraction" 
                    part of each real number is concerned, and only the exponent 
                    changes! As powers of two, both ARE exactly representable. 
                  &lt;/P&gt;&lt;P&gt; When you REALLY need to look at numbers, there are several 
                    formats that we find useful (alphabetically):&lt;/P&gt;
                  &lt;TABLE width="50%" border="0"&gt;
                    &lt;TBODY&gt;&lt;TR&gt; 
                      &lt;TD width="20%" align="center" class="tableData"&gt;B&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;Binary&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD width="20%" align="center" class="tableData"&gt;E&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;Real values with E exponents&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD width="20%" align="center" class="tableData"&gt;F&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;Real values with no exponent&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD width="20%" align="center" class="tableData"&gt;I&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;Integer values&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD width="20%" align="center" class="tableData"&gt;O&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;Octal values&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD width="20%" align="center" class="tableData"&gt;Z&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;Hexadecimal values&lt;/TD&gt;
                    &lt;/TR&gt;
                  &lt;/TBODY&gt;&lt;/TABLE&gt;
                  &lt;P&gt;My personal favorites tend to be F and Z. So let's take an 
                    up-close and personal look at some whole numbers first and 
                    then some fractions. 
                  &lt;/P&gt;&lt;P&gt;Try the following program (printing small whole numbers, 
                    both as integers and as reals): 
                  &lt;/P&gt;&lt;P&gt; 
                  &lt;PRE&gt;
      integer, parameter :: lower = 0
      integer, parameter :: upper  = 8

      do i = lower, upper 
      type 1, i, i, i 
1     format (' integer:  ', i, 1x, b, 1x, z) 
      enddo 

      do i = lower, upper 
      x = float(i) 
      type 2, x, x, x 
2     format (' Real: ', f, 1x, b33.32, 1x, z12.8) 
      enddo

      end 
					&lt;/PRE&gt;
                  &lt;/P&gt;&lt;P&gt;It produces:&lt;/P&gt;
                  &lt;PRE&gt;
 integer:            0                                 0        0
 integer:            1                                 1        1
 integer:            2                                10        2
 integer:            3                                11        3
 integer:            4                               100        4
 integer:            5                               101        5
 integer:            6                               110        6
 integer:            7                               111        7
 integer:            8                              1000        8
 Real:       0.0000000  00000000000000000000000000000000 00000000
 Real:       1.0000000  00111111100000000000000000000000 3F800000
 Real:       2.0000000  01000000000000000000000000000000 40000000
 Real:       3.0000000  01000000010000000000000000000000 40400000
 Real:       4.0000000  01000000100000000000000000000000 40800000
 Real:       5.0000000  01000000101000000000000000000000 40A00000
 Real:       6.0000000  01000000110000000000000000000000 40C00000
 Real:       7.0000000  01000000111000000000000000000000 40E00000
 Real:       8.0000000  01000001000000000000000000000000 41000000
 &lt;/PRE&gt;
                  &lt;P&gt; The integers form a nice progression of bits (look at the 
                    "b" formatted column). If we look at the reals, 
                    using "b" or "z" format, we see a similar 
                    pattern. Notice that zero is the same for both integer and 
                    real (although for a real we CAN represent -0.0). Look at 
                    2.0000000. There is only a single bit set! And it's way up 
                    in the exponent field. How can this be?&lt;/P&gt;
                  &lt;P&gt;Normally we would observe that a real number (IEEE) comprises 
                    a sign (high, left) bit, an exponent (8 bits for single precision 
                    -- real), and a fraction (the remaining, rightmost 23 bits). 
                    When we "normalize" any real number, the fraction 
                    gets shifted so that the high bit is "1" and the 
                    exponent adjusted accordingly. But if the high bit is always 
                    "1", we can elect to just discard it to save space 
                    (and add precision), and generally this is done. So the fraction 
                    is really the rightmost 23 bits PLUS a "hidden" 
                    bit of 1. For a number like 2.0, which is exactly 2.**1, the 
                    fraction is 10000000000000000000000 before we toss the hidden 
                    bit and, hence, is 00000000000000000000000 afterwards! If 
                    you look carefully, you will observe that 2.0, 4.0, and 8.0 
                    all have a zero fraction (rightmost 23 bits). But 3.0, whose 
                    fraction starts out as 11000000000000000000000, becomes 10000000000000000000000 
                    after dropping the (high) hidden bit. And, of course, there 
                    are also appropriate exponent bits to the far left (perhaps 
                    discussed in more detail in a later article).&lt;/P&gt;
                  &lt;P&gt;Notice that in these real numbers there are quite a few zeros 
                    in the fraction (rightmost 23 bits). ALL small integer values 
                    will look like this! For example, let's take 42, which as 
                    an integer in binary is 101010 (2**5 + 2**3 + 2**1). The fraction 
                    before tossing the hidden bit would be 10101000000000000000000 
                    and afterwards is just 01010000000000000000000, so there are 
                    lots of zeros (still) to the right. This is a good indication 
                    that we are dealing with an "exact" value (not proof, 
                    but it happens a lot).&lt;/P&gt;
                  &lt;P&gt;Let's try another program to look at the small negative powers 
                    of 2: &lt;/P&gt;
                  &lt;PRE&gt;
      integer, parameter :: lower = 0 
      integer, parameter :: upper = 10 

      x = 1. 
      do i = lower, upper 
      x = x/2.0 
      type 2, x, x, x 
2     format(' Real: ', f25.20, 1x, b, 1x, z) 
      enddo 

      end
&lt;/PRE&gt;
                  &lt;P&gt;Notice that we used a very "wide" format -- f25.20 
                    so that we can get a better look at the "full" result 
                    (all of the nonzero digits). This is a VERY useful trick... 
                    The result is: &lt;/P&gt;
                  &lt;PRE&gt;
 Real: 0.50000000000000000000 111111000000000000000000000000 3F000000
 Real: 0.25000000000000000000 111110100000000000000000000000 3E800000
 Real: 0.12500000000000000000 111110000000000000000000000000 3E000000
 Real: 0.06250000000000000000 111101100000000000000000000000 3D800000
 Real: 0.03125000000000000000 111101000000000000000000000000 3D000000
 Real: 0.01562500000000000000 111100100000000000000000000000 3C800000
 Real: 0.00781250000000000000 111100000000000000000000000000 3C000000
 Real: 0.00390625000000000000 111011100000000000000000000000 3B800000
 Real: 0.00195312500000000000 111011000000000000000000000000 3B000000
 Real: 0.00097656250000000000 111010100000000000000000000000 3A800000
 Real: 0.00048828125000000000 111010000000000000000000000000 3A000000
					&lt;/PRE&gt;
                  &lt;P&gt;So these are the first few negative powers of two. Just like 
                    the positive powers from the first example, these all have 
                    a zero fraction (after tossing the hidden bit). Notice that 
                    the actual values in f format all end in "5". And 
                    the 5 keeps moving to the next column. This means that ANY 
                    fractional sum will also end in 5. The consequence is that 
                    if you provide a fraction whose last nonzero digit is NOT 
                    5 (like 0.000276000000) it CANNOT be exactly represented as 
                    the sum of any negative powers of two! This is a VERY important 
                    point. You say, "So what." Well, this means that 
                    lots of "common" numbers are not exactly representable, 
                    like 0.10000000 and 0.200000000000000, although 0.500 IS exactly 
                    representable. And while some fractions ending in 5 CAN be 
                    represented, many cannot. Consider 5.0 divided by powers of 
                    10.: &lt;/P&gt;
                  &lt;PRE&gt;
      do i = 1,10 
      x = 5.0/(10.**i) 
      type 1, x, x 
1     format (1x, f40.30, 1x, b) 
      enddo 
      end 
&lt;/PRE&gt;
                  &lt;P&gt;which produces: 
                  &lt;PRE&gt;
   0.500000000000000000000000000000 111111000000000000000000000000 
   0.050000000745058059692382812500 111101010011001100110011001101 
   0.004999999888241291046142578125 111011101000111101011100001010 
   0.000500000023748725652694702148 111010000000110001001001101111 
   0.000049999998736893758177757263 111000010100011011011100010111 
   0.000004999999873689375817775726 110110101001111100010110101100 
   0.000000499999998737621353939176 110101000001100011011110111101 
   0.000000050000000584304871154018 110011010101101011111110010101 
   0.000000004999999969612645145389 110001101010111100110001110111 
   0.000000000499999985859034268287 110000000010010111000001011111 
                  &lt;/PRE&gt;
                  &lt;/P&gt;&lt;P&gt;Hey, only that first one is EXACT! Notice that the others, 
                    while "close" to .05, .005, .0005. etc. are not 
                    EXACTLY .05, .005, .0005 etc. Some are a little bigger, some 
                    smaller (popularly called "nines disease"). In fact, 
                    with the exception of 0.500, all the others CANNOT be exactly 
                    represented as sums of powers of 2! Observe, however, that 
                    only when a wide format is used is this apparent. With a smaller 
                    format width, most of these will look just fine due to rounding! 

                  &lt;/P&gt;
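                  &lt;P&gt;To watch rounding hide the inexactness, try printing 0.1 with both a narrow and a wide format (a quick sketch in the same style as the programs above):&lt;/P&gt;
                  &lt;PRE&gt;
      x = 0.1 
      type 1, x 
      type 2, x 
1     format (' narrow: ', f10.5) 
2     format ('   wide: ', f25.20) 
      end 
&lt;/PRE&gt;
                  &lt;P&gt;The narrow format shows a reassuring 0.10000, while the wide 
                    one reveals something close to 0.10000000149011611938, the 
                    nearest single-precision value to 0.1.&lt;/P&gt;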
                  &lt;P&gt;We are finally at the point where we can decide which numbers 
                    are EXACTLY representable: &lt;/P&gt;
                  &lt;DL&gt; 
                    &lt;DT&gt;1.0&lt;/DT&gt;
                    &lt;DD&gt;Yes (any small integer is fine)&lt;/DD&gt;
                    &lt;DT&gt;-12.5&lt;/DT&gt;
                    &lt;DD&gt; Yes, small integer + negative power of 2 (.5)&lt;/DD&gt;
                    &lt;DT&gt;.1234&lt;/DT&gt;
                    &lt;DD&gt;No, last fractional digit is not 5&lt;/DD&gt;
                    &lt;DT&gt;0.567&lt;/DT&gt;
                    &lt;DD&gt;No, last fractional digit is not 5&lt;/DD&gt;
                    &lt;DT&gt;-7.00&lt;/DT&gt;
                    &lt;DD&gt;Yes, small integer&lt;/DD&gt;
                    &lt;DT&gt;3.14159265&lt;/DT&gt;
                    &lt;DD&gt;Cannot easily tell (last fractional digit is 5) [Is actually 
                      NOT representable]&lt;/DD&gt;
                    &lt;DT&gt;0.000030517578125&lt;/DT&gt;
                    &lt;DD&gt;Cannot easily tell (last fractional digit is 5) [IS actually 
                      representable]&lt;/DD&gt;
                    &lt;DT&gt;1000000000000000000.&lt;/DT&gt;
                    &lt;DD&gt;Maybe (small integer, for some value of "small")&lt;/DD&gt;
                  &lt;/DL&gt;
                  &lt;P&gt;To decide the last 3 values, just try the following: &lt;/P&gt;
                  &lt;PRE&gt;
      type 1, 3.14159265 
      type 1, 0.000030517578125 
      type 1, 1000000000000000000. 
      type 1, 1000000000000000000.D0 
1     format(f60.30) 
      end 
&lt;/PRE&gt;
                  &lt;P&gt;and observe: &lt;/P&gt;
                  &lt;PRE&gt;
                   3.141592741012573242187500000000 
                   0.000030517578125000000000000000 
  999999984306749440.000000000000000000000000000000 
 1000000000000000000.000000000000000000000000000000 
                  &lt;/PRE&gt;
                  &lt;P&gt;We see that the closest representable real number to 3.14159265 
                    is actually 3.14159274...; 0.000030517578125 CAN be represented 
                    exactly (it is, in fact, a power of 2); and while 1000000000000000000. 
                    cannot be represented as a real number (can you explain this 
                    more precisely?), it CAN be represented as a double-precision 
                    number (more than twice as many fraction bits). Once again, 
                    notice the use of an even wider format to help get a better 
                    look at the numbers! Keep in mind that for a statement like: 
                  &lt;/P&gt;
                  &lt;PRE&gt;type 1, 3.14159265 &lt;/PRE&gt;
                  &lt;P&gt;the Fortran compiler and runtime library will do a "double 
                    conversion." The compiler will convert the string 3.14159265 
                    into a real value, and the runtime system will then convert 
                    back to a string (under format control) to produce 3.141592741012573242187500000000. 
                    Neither of these conversions is easy, but thankfully the Fortran 
                    compiler and runtime library perform all of this heavy lifting! 
                  &lt;/P&gt;
                  &lt;P&gt;As a quiz for next time, consider the following program: 
                  &lt;/P&gt;
                  &lt;PRE&gt;
      i = 1000000013 
      x = i 
      type 1, i, x 
1     format(1x,i,1x,f20.5) 
      end 
	&lt;/PRE&gt;
                  &lt;P&gt;It gives: &lt;/P&gt;
                  &lt;PRE&gt;1000000013    1000000000.00000 &lt;/PRE&gt;
                  &lt;P&gt;Try to figure out where the "unlucky 13" went! 
                    Why does it come back if we use /real_size:64? Look for the 
                    answers in Part II of this article in a future newsletter 
                    issue.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 00:49:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844706#M62675</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T00:49:21Z</dc:date>
    </item>
    <item>
      <title>Re: Visual Fortran Newsletter Articles</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844707#M62676</link>
      <description>

&lt;P&gt;April 2001 (Edited February 22, 2016)&lt;/P&gt;

&lt;H3&gt;Win32 Corner - ShellExecute&lt;/H3&gt;

&lt;H4&gt;Steve Lionel&lt;BR /&gt;
	Visual Fortran Engineering&lt;/H4&gt;

&lt;P&gt;&lt;EM&gt;Win32 Corner&lt;/EM&gt; is a new feature of the newsletter that illustrates how to use Win32 API routines to do commonly requested tasks.&lt;/P&gt;

&lt;P&gt;The ShellExecute API routine is handy for opening a web page, or any document, using its natural editing tool. It's equivalent to right-clicking a file and selecting Open; you can also choose the default action (whatever is listed first), Print, or Edit. I've found it most useful for opening a web page with the user's default browser.&lt;/P&gt;

&lt;P&gt;Open shellexecute.f90 (attached) and reference the numbered comments (!!1, etc.) below:&lt;/P&gt;

&lt;OL&gt;
	&lt;LI&gt;ShellExecute is part of the Shell API and is defined in module SHELL32. You could also USE IFWIN.&lt;/LI&gt;
	&lt;LI&gt;The hWnd argument is the handle of the owner's window. In a Console Application, NULL is the thing to use, but in a Windows Application you might want the main window, and in QuickWin, use GETHWNDQQ(QWIN$FRAMEWINDOW).&lt;/LI&gt;
	&lt;LI&gt;lpOperation (referred to as lpVerb in newer versions of the MS documentation) is a C-string that says what to do with the file. "open" is what you'll want most often, but you could also specify "edit" or "print". If the argument is null, then the "default action" is used.&lt;/LI&gt;
	&lt;LI&gt;lpFile is the thing we want to open. It could be an ordinary file, or a URL. Note the NUL-termination to make it a C-string.&lt;/LI&gt;
	&lt;LI&gt;If we were opening (running) an executable file, command parameters would go here as a NUL-terminated string. Since the interface for ShellExecute has ALLOW_NULL as an attribute for this argument, specifying NULL is the way to omit it.&lt;/LI&gt;
	&lt;LI&gt;You can specify a default directory if you want as a NUL-terminated character string. Again, NULL omits it.&lt;/LI&gt;
	&lt;LI&gt;nShowCmd specifies how you want the window to appear. SW_SHOWNORMAL is the standard behavior, but you could also specify minimized, maximized and whether or not to hide the active window.&lt;/LI&gt;
	&lt;LI&gt;If ShellExecute returns a value greater than 32, it succeeded, otherwise an error occurred. Note that ShellExecute returns immediately - it does not wait for the opened application to finish.&lt;/LI&gt;
&lt;/OL&gt;
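&lt;P&gt;In case the attachment isn't handy, here is a minimal sketch along the lines of those numbered points (assuming the declarations provided by the IFWIN module; the URL is just an example):&lt;/P&gt;
&lt;PRE&gt;
program open_url
  use ifwin                 ! declares ShellExecute, NULL, SW_SHOWNORMAL
  implicit none
  integer(HANDLE) :: ret
  ! NUL-terminated C-strings (points 3 and 4); NULL omits the
  ! parameters and directory arguments (points 5 and 6)
  ret = ShellExecute (NULL, "open"C, "http://www.intel.com/"C, &amp;
                      NULL, NULL, SW_SHOWNORMAL)
  ! values greater than 32 mean success (point 8)
  if (ret &lt;= 32) print *, 'ShellExecute failed, code ', ret
end program open_url
&lt;/PRE&gt;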

&lt;P&gt;Try building and running shellexecute.f90 as a "Fortran Console Application". Enter a favorite URL, such as &lt;A href="http://www.intel.com/" target="_blank"&gt;http://www.intel.com/&lt;/A&gt;, or the path to a file on your system, then watch it open!&lt;/P&gt;

&lt;P&gt;For more information on ShellExecute, look it up in the MSDN Library online.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 00:53:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844707#M62676</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T00:53:00Z</dc:date>
    </item>
    <item>
      <title>Doctor Fortran Gets Explicit</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844708#M62677</link>
      <description>&lt;P&gt;April 2001&lt;/P&gt;
&lt;H3&gt;Doctor Fortran Gets Explicit!&lt;/H3&gt;
                  &lt;H4&gt;Steve Lionel&lt;BR /&gt;


                    Visual Fortran Engineering&lt;/H4&gt;
                  &lt;P&gt;In our last issue, 
                    the Good Doctor covered the topic of optional arguments, noting 
                    that an explicit interface was required. Since explicit interfaces 
                    seem to be a common point of confusion for those new to Fortran 
                    90, (and some not so new), we'll cover this subject in more 
                    detail.&lt;/P&gt;
                  &lt;P&gt;In Fortran terminology, an &lt;I&gt;interface&lt;/I&gt; is a declaration 
                    of some other procedure that supplies details, including:&lt;/P&gt;
                  &lt;UL&gt;
                    &lt;LI&gt;Name of the procedure&lt;/LI&gt;
                    &lt;LI&gt;Whether it is a subroutine or function&lt;/LI&gt;
                    &lt;LI&gt;If a function, the result type&lt;/LI&gt;
                    &lt;LI&gt;Number, names, shapes and types of arguments&lt;/LI&gt;
                    &lt;LI&gt;Argument attributes, such as OPTIONAL and INTENT&lt;/LI&gt;
                  &lt;/UL&gt;
                  &lt;P&gt;Prior to Fortran 90, the only kind of interface was &lt;I&gt;implicit&lt;/I&gt;, 
                    meaning that the compiler assumed that a routine call matched 
                    the actual routine - all you could do was specify the type 
                    of a function. The standard required that "the actual 
                    arguments ... must agree in order, number and type with the 
                    corresponding dummy arguments in the dummy argument list of 
                    the referenced subroutine." Not only was this error-prone, 
                    but it made it difficult to support desirable features such 
                    as optional arguments and array function results.&lt;/P&gt;
                  &lt;P&gt;The &lt;I&gt;explicit interface&lt;/I&gt;, introduced with Fortran 90, 
                    allows you to tell the compiler many more details about the 
                    called routine. This additional information allows a compiler 
                    to check for consistency in routine calls and also enables 
                    features such as optional arguments that depend on changes 
                    in the way the routine is called. In most cases, an explicit 
                    interface consists of an INTERFACE block which contains a 
                    copy of the called routine's declaration. For example:&lt;/P&gt;
                  &lt;BLOCKQUOTE&gt;INTERFACE&lt;BR /&gt;


                    SUBROUTINE MYSUB (A,B)&lt;BR /&gt;


                    INTEGER :: A&lt;BR /&gt;


                    REAL, OPTIONAL, INTENT(IN) :: B&lt;BR /&gt;


                    END SUBROUTINE MYSUB&lt;BR /&gt;


                    END INTERFACE&lt;/BLOCKQUOTE&gt;
                  &lt;P&gt;An INTERFACE block can go in the declaration section of a 
                    program unit, or can be made visible by use-association (in 
                    a MODULE that is USEd) or host-association (a program unit 
                    that contains the one that needs to see the interface.) An 
                    explicit interface also exists, without an INTERFACE block, 
                    if the routine is a contained procedure or is a module procedure 
                    in the enclosing module or a module that is use-associated 
                    (and the module procedure has not been made PRIVATE).&lt;/P&gt;
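                  &lt;P&gt;As a sketch of the module route (the names here are invented for illustration):&lt;/P&gt;
                  &lt;PRE&gt;
MODULE MYSUBS
CONTAINS
  SUBROUTINE MYSUB (A, B)
    INTEGER :: A
    REAL, OPTIONAL, INTENT(IN) :: B
    A = 1                   ! placeholder body
  END SUBROUTINE MYSUB
END MODULE MYSUBS

PROGRAM MAIN
  USE MYSUBS                ! the explicit interface comes along for free
  INTEGER :: I
  CALL MYSUB (I)            ! B omitted -- legal, interface is explicit
END PROGRAM MAIN
&lt;/PRE&gt;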
                  &lt;P&gt;While there are many good reasons why you should always use 
                    explicit interfaces, including better error checking and improved 
                    run-time performance (avoiding unnecessary copy-in, copy-out 
                    code), there are some cases where you are required to have 
                    an explicit interface visible. These are:&lt;/P&gt;
                  &lt;UL&gt;
                    &lt;LI&gt; If the procedure has any of the following: 
                      &lt;UL&gt;
                        &lt;LI&gt;An optional dummy argument&lt;/LI&gt;
                        &lt;LI&gt; A dummy argument that is an assumed-shape array, 
                          a pointer, or a target&lt;/LI&gt;
                        &lt;LI&gt;A result that is array-valued or a pointer (functions 
                          only)&lt;/LI&gt;
                        &lt;LI&gt;A result whose length is neither assumed nor a constant 
                          (character functions only) 
                      &lt;/LI&gt;&lt;/UL&gt;
                    &lt;/LI&gt;
                    &lt;LI&gt;If a reference to the procedure appears as follows: 
                      &lt;UL&gt;
                        &lt;LI&gt;With an argument keyword&lt;/LI&gt;
                        &lt;LI&gt;As a reference by its generic name&lt;/LI&gt;
                        &lt;LI&gt;As a defined assignment (subroutines only)&lt;/LI&gt;
                        &lt;LI&gt;In an expression as a defined operator (functions 
                          only)&lt;/LI&gt;
                        &lt;LI&gt;In a context that requires it to be pure&lt;/LI&gt;
                      &lt;/UL&gt;
                    &lt;/LI&gt;
                    &lt;LI&gt;If the procedure is elemental &lt;/LI&gt;
                  &lt;/UL&gt;
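                  &lt;P&gt;To make the first case concrete, here is a minimal sketch 
                    (the names are invented for illustration) of a caller that 
                    needs an INTERFACE block because the dummy argument is an 
                    assumed-shape array: &lt;/P&gt;
                  &lt;PRE&gt;
      INTERFACE
        SUBROUTINE FILL (A)
        REAL, INTENT(OUT) :: A(:)   ! assumed-shape dummy argument
        END SUBROUTINE FILL
      END INTERFACE
      REAL :: X(10)
      CALL FILL (X)                 ! legal only with the explicit interface
      PRINT *, X(1)
      END

      SUBROUTINE FILL (A)           ! an external procedure
      REAL, INTENT(OUT) :: A(:)
      A = 1.0
      END SUBROUTINE FILL&lt;/PRE&gt;
                  &lt;P&gt;Without the INTERFACE block, the compiler would have no 
                    way to pass the array's shape to FILL, and the call would 
                    be in error. &lt;/P&gt;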
                  &lt;P&gt;For more information on explicit interfaces, see Chapter 
                    8 of the Intel 
                    Fortran Language Reference Manual.&lt;/P&gt;
                  &lt;P&gt;In closing, Doctor Fortran prescribes using explicit interfaces 
                    throughout your application, ideally with an appropriate INTENT 
                    attribute specified for each argument. It may be a bit more 
                    typing up front, but it will quickly pay off in smoother development 
                    and, possibly, faster execution.&lt;/P&gt;&lt;P&gt;[Revisiting this topic in 2008, my advice has changed. You should avoid writing INTERFACE blocks for Fortran code. Instead, put your subroutines and functions in modules, or make them CONTAINed procedures if they'll be called from a limited context. This provides the explicit interface without needing to retype the declarations. - Steve]&lt;/P&gt;&lt;P&gt;[I revisited this topic in 2012 - see &lt;A target="_blank" href="http://software.intel.com/en-us/blogs/2012/01/05/doctor-fortran-gets-explicit-again/"&gt;Doctor Fortran Gets Explicit - Again!&lt;/A&gt;]&lt;SPAN class="time_text"&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 00:58:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844708#M62677</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T00:58:37Z</dc:date>
    </item>
    <item>
      <title>The Perils of Real Numbers (Part 2)</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844709#M62678</link>
      <description>&lt;P&gt;June 2001&lt;/P&gt;
                  &lt;H3&gt;&lt;A name="Eklund" target="_blank"&gt;&lt;/A&gt;The Perils of Real Numbers (Part 2)&lt;/H3&gt;
                  &lt;H4&gt;Dave Eklund&lt;BR /&gt;



                    Compaq Fortran Engineering&lt;/H4&gt;
                  &lt;P&gt; In Part 1 we offered the 
                    following problematical program: &lt;/P&gt;
                  &lt;PRE&gt;
      i = 1000000013 
      x = i
      type 1, i, x
1     format(1x,i,1x,f20.5)
      end&lt;/PRE&gt;
                   which gives: 
                  &lt;PRE&gt; 1000000013     1000000000.00000&lt;/PRE&gt;
                   
                  &lt;P&gt; Where did the "unlucky 13" go!? Why does it come 
                    back if we use /real_size:64? Let's look a little more closely 
                    at the distribution of integer and real numbers. You will 
                    recall that any integer is represented simply as the sum of 
                    POSITIVE (and zero) powers of 2, and there is no exponent 
                    field. This results in a flat distribution of values from 
                    -2**31 all the way up to 2**31-1, or -2147483648 up to 2147483647. 
                    Every integer value between these end points is included. 
                    There is only one value of zero. There is one value which 
                    does not have a counterpart of opposite sign (-2**31). Notice 
                    that this means that all of the integer values are "evenly 
                    spaced" across the entire range. &lt;/P&gt;
                  &lt;P&gt; The same general statements hold for all the other integer 
                    types (KIND = 1, 2, and 8, or their non-standard names: integer*1, 
                    integer*2 and integer*8). All evenly spaced, and no exponent 
                    field. Not having an exponent field means, in effect, that 
                    there are 31 contiguous bits of "value" in an integer, 
                    whereas there are only 24 such bits in a real number (the 
                    23 fraction bits and the hidden bit). In a real number the 
                    rest of the bits are sign (1 bit) and exponent (8 bits). &lt;/P&gt;
                  &lt;P&gt; So let's look at what whole numbers we can represent as 
                    a real number. Well, we already know that we can represent 
                    any "small" whole number. In fact there is no difficulty 
                    whatsoever representing any whole number up to 2**24. But 
                    then something unusual happens. Take the following program: 
                  &lt;/P&gt;
                   
                  &lt;PRE class="FtnCode"&gt;
    integer :: two_24 = 2**24
	  
    do k = -2, 2
    i = two_24 + k
    type 1, i, i, float(i), float(i), float(i)
1   format(i9,1x,z9,1x,f12.1,1x,b33.32,1x,z)
    enddo

    end&lt;/PRE&gt;
                  &lt;P&gt;The program prints the whole numbers just before and after 
                    2**24 as integers and as real numbers. The result is shown 
                    below: &lt;/P&gt;
                  &lt;PRE class="FtnCodeSmall"&gt;
Integer:   in hex:  Real number:      Real in binary: 

16777214    FFFFFE   16777214.0  01001011011111111111111111111110 
16777215    FFFFFF   16777215.0  01001011011111111111111111111111
16777216   1000000   16777216.0  01001011100000000000000000000000
16777217   1000001   16777216.0  01001011100000000000000000000000 
16777218   1000002   16777218.0  01001011100000000000000000000001 

&lt;/PRE&gt;
                  &lt;P&gt; While we had no difficulty representing the value 2**24+1 
                    as an integer, it was quite impossible as a real number. The 
                    integer value in hex is 1000001 -- notice that the first 
                    and last "1" bits span 25 bit positions! And this is 
                    not possible with the 24 significant bits (23 fraction bits 
                    plus the hidden bit) of the real number! 
                    Hence 16777217 is the first whole number that we cannot represent 
                    as a real. Looked at another way, 16777215 is the last "odd" 
                    whole number that can be represented as a (single precision) 
                    real. Trivia buffs, rejoice! &lt;/P&gt;
                  &lt;P&gt; From 2.**24 up to 2.**25 we can only represent every other 
                    whole number (all the even ones) -- we step by two. From 2.**25 
                    up to 2.**26 we can represent every fourth whole number (all 
                    those evenly divisible by 4.). And so it goes. By the time 
                    we get up to 1000000013. (the number in the first example 
                    above), the two closest representable real numbers are: 1000000000. 
                    (4E6E6B28 in hex) and 1000000064. (4E6E6B29 in hex) which 
                    are 64. apart! &lt;/P&gt;
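                  &lt;P&gt;You need not take our word for the step size: the Fortran 90 
                    intrinsic SPACING returns exactly this distance to the next 
                    representable value. A quick sketch (the commented values 
                    assume IEEE single precision): &lt;/P&gt;
                  &lt;PRE&gt;
      print *, spacing(1.0)          ! 2.0**(-23), about 1.1920929E-07
      print *, spacing(16777216.0)   ! 2.0 -- every other whole number
      print *, spacing(1.0e9)        ! 64.0 -- the gap at 1000000013.
      end&lt;/PRE&gt;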
                  &lt;P&gt; The thing to remember is that as the real numbers get larger, 
                    they get further and further apart! That low order bit in 
                    the fraction gets to represent larger and larger "steps" 
                    between adjacent numbers. The "step size" is directly 
                    determined by the exponent field value. You will find that 
                    real numbers are really "dense" near zero. In fact 
                    very close to 50% of the real numbers lie between -1.0 and 
                    1.0! The same is true for double precision. With double precision 
                    instead of 23 fraction bits (and a hidden bit) we have 52 
                    bits (and a hidden bit). This allows us to express all the 
                    whole numbers up to 2**53, but not 2**53+1. This is why /real_size:64 
                    causes the original example to "work" (not lose 
                    the unlucky 13)! &lt;/P&gt;
                  &lt;P&gt; In fact, since double precision has 53 fraction bits, ANY 
                    32-bit integer value can be represented EXACTLY as a double 
                    precision value. Similarly any integer(kind=2), which is a 
                    16-bit integer, can be represented EXACTLY as a real (24 covers 
                    16 just as 53 covers 32!). &lt;/P&gt;
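                  &lt;P&gt;A sketch of that round trip (kind=8 here means 64-bit IEEE 
                    double precision, as it does in Compaq and Intel Fortran): &lt;/P&gt;
                  &lt;PRE&gt;
      integer :: i = 2147483647      ! HUGE(0), the largest 32-bit integer
      real(kind=8) :: d
      d = i                          ! exact: 53 fraction bits cover all 31
      print *, int(d) == i           ! T -- nothing was lost either way
      end&lt;/PRE&gt;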
                  &lt;P&gt; Does this mean that real numbers are "less precise" 
                    as we get further from zero? Curiously enough, the answer 
                    is no. While the representable numbers are further apart, 
                    they still have exactly the same number of "significant 
                    bits" -- 24 or 53 for real and double precision respectively. 
                    Significant bits? What about significant digits? When we talk 
                    about "significance", we are talking about the number 
                    of leading non-zero bits (or digits) that are known to be 
                    "present" or fully representable. Remember that 
                    we were able to express 16777216 but not 16777217 as a real? 
                    Well, the 1677721 part (24 bits, 7 digits) was significant, 
                    but that last digit, alas, is imprecise and cannot be represented 
                    in the real number format. For those who love the details, 
                    since it takes log_base2(10) bits to represent any 1 digit 
                    (3.321928 bits per digit), then 24 bits gives us 7.224720 
                    digits--or 7 significant digits. And for double precision 
                    53 bits gives us 53.*LOG10(2.) or 15.95459 digits -- 15 significant 
                    digits (nearly 16). &lt;/P&gt;
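                  &lt;P&gt;You can reproduce that arithmetic in one line each (the 
                    printed precision will vary a bit by compiler): &lt;/P&gt;
                  &lt;PRE&gt;
      print *, 24.*log10(2.)   ! 7.224720 -- single precision digits
      print *, 53.*log10(2.)   ! 15.95459 -- double precision digits
      end&lt;/PRE&gt;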
                  &lt;P&gt; So you are saying that no matter what the real number, there 
                    are always 7 significant digits? Well, yes and no (nobody 
                    ever said this was simple!). There are three major exceptions: 
                    denormalized numbers, +-Infinity, and NaN (Not a Number). 
                    All of these anomalies are recent arrivals on the hardware 
                    scene. So recent, in fact, that the Fortran Standard does 
                    NOT require them, nor pin down their behavior! &lt;/P&gt;
                  &lt;P&gt; For a long time hardware designers were content with integer 
                    and then real data types and ever faster computers to manipulate 
                    them. But there were those who wanted more; those who were 
                    not content that dividing by zero caused their programs to 
                    ABEND (die for you youngsters). Those who wanted to be able 
                    to express 1.0/0.0; those who could visualize 0.0/0.0 (NOT 
                    to be confused with visionaries). Ah, what evil lurks... And 
                    so there came to be the IEEE Standard for Binary Floating-Point 
                    Arithmetic or ANSI/IEEE Std 754-1985. &lt;/P&gt;
                  &lt;P&gt; In this standard you would find definitions of number formats, 
                    basic operations, conversions, exceptions, traps, rounding, 
                    etc. Most modern machines provide hardware (and software) 
                    that conform to this standard. Portability, efficiency and 
                    safety are some of the most important stated goals of this 
                    standard. However, the introduction of +-Infinity and NaN 
                    brought a whole new set of possibilities and problems. &lt;/P&gt;
                  &lt;P&gt; Let's start with Infinity. In the old days there were two 
                    pretty easy ways to get a program to die--divide by zero, 
                    or overflow (multiply two very large numbers together, for 
                    example). Each of these is a limitation of the "range" 
                    of possible result values. If you cannot represent a value 
                    of "Infinity", what result value should be given 
                    to a divide by zero?! Well, there were two schools of thought. 
                    Some wanted their program to die (division by zero is ALWAYS 
                    a mistake that was not checked for in MY algorithm). &lt;/P&gt;
                  &lt;P&gt; Others wanted to "keep on trucking" (you simply 
                    cannot just die after 3000 hours of running MY program!) with 
                    some artificial, but specified, value as the result. While 
                    the latter group wanted "non-stop" computing, they 
                    also wanted some indication that their final results might 
                    be tainted. They successfully lobbied for special values: 
                    Infinity, -Infinity and NaN, and a "standard" treatment 
                    of these values in subsequent arithmetic computations and 
                    comparisons. So, for example if the user:&lt;/P&gt;
                  &lt;TABLE width="100%" border="1"&gt;
                    &lt;TBODY&gt;&lt;TR&gt; 
                      &lt;TH width="100" class="tableDataHeader"&gt;Computes&lt;/TH&gt;
                      &lt;TH class="tableDataHeader"&gt;The 
                        result is&lt;/TH&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;2.0 
                        * 4.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;8.0 
                        (usually, a "quality of implementation" issue!)&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;10.0 
                        / 0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;Infinity&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;-5.0 
                        / 0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;-Infinity&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;0.0 
                        / 0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN 
                        (division by zero does NOT always give Infinity!)&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;0.0 
                        == -0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;.TRUE.&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;Infinity 
                        * 0.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN 
                        (can you just imagine the debate over this one!)&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;Infinity 
                        - Infinity&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;Infinity 
                        / Infinity&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;1.0 
                        / Infinity&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;0.0&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;-1.0 
                        / Infinity&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;-0.0&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;NaN 
                        * 3.0&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;NaN&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;NaN 
                        == NaN&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;.FALSE. 
                        (optimizing compilers love this one...)&lt;/TD&gt;
                    &lt;/TR&gt;
                    &lt;TR&gt; 
                      &lt;TD class="tableData"&gt;NaN 
                 
       /=NaN&lt;/TD&gt;
                      &lt;TD class="tableData"&gt;.TRUE. 
                        (... and this one, too!)&lt;/TD&gt;
                    &lt;/TR&gt;
                  &lt;/TBODY&gt;&lt;/TABLE&gt;
                  &lt;P&gt; This standard also defined SQRT, but NOT any of the intrinsic 
                    functions like SIN, COS, TAN, SUM, PRODUCT, etc. The result 
                    of all of this was that many programs could just keep running, 
                    producing +-Infinity and NaN as they went, and not particularly 
                    worry about dividing by zero or the aftermath (pun intended!). 
                    And these values would tend to propagate themselves whenever 
                    they are used. You CAN "get rid of" an Infinity 
                    if all you do is to use it as a divisor (producing zero), 
                    but NaN is really hard to "get rid of". In fact, 
                    about the only way to constructively eliminate a NaN is to 
                    do something like: &lt;/P&gt;
                  &lt;PRE&gt;
IF(ISNAN(X)) THEN
! Replace X with something else
! or use better/other algorithm, etc.
ENDIF&lt;/PRE&gt;
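                  &lt;P&gt;A caution: ISNAN is a vendor extension, not standard 
                    Fortran. A portable sketch uses the one property from the 
                    table above, that a NaN is the only value which compares 
                    unequal to itself (though beware: an aggressive optimizer 
                    may fold this test away unless told to respect IEEE 
                    semantics): &lt;/P&gt;
                  &lt;PRE&gt;
IF (X /= X) THEN     ! true only when X is a NaN
! Replace X with something else
! or use better/other algorithm, etc.
ENDIF&lt;/PRE&gt;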
                  &lt;P&gt;Ah, but much was left undefined. For example, what result 
                    would you like to produce for SIN(X) where X is Infinity? 
                    As you know, SIN normally has a range between -1. and 1., 
                    so should we return Infinity? Would NaN be better? How about 
                    a more traditional "DOMAIN error" for the intrinsic 
                    function? And if intrinsic functions are not enough trouble, 
                    how about comparisons? For example while (Infinity .GT. 17.0) 
                    is .TRUE. (defined that way), it might not be so obvious that 
                    (NaN .EQ. NaN) is .FALSE. or that (Infinity .GT. NaN) is .FALSE. 
                    There is a whole new algebra, but only defined for primitive 
                    arithmetic and comparison operations (this IS a hardware standard, 
                    after all!). Don't even think about COMPLEX numbers such as: 
                    (-Infinity, NaN)... &lt;/P&gt;
                  &lt;P&gt; In order to represent Infinity and NaN, the IEEE standard 
                    chose to make all reals having the largest exponent value 
                    (all 1's) "reserved". If the exponent is all 1's 
                    and the fraction is zero, we have an Infinity. The sign bit 
                    is relevant, so there is one value for +Infinity (7F800000 
                    in hex) and one value for -Infinity (FF800000). If the exponent 
                    is all 1's and the fraction is ANY non-zero value, then this 
                    is a NaN. Notice that there are many different values for 
                    NaN. There are even two different kinds of NaN, Quiet and 
                    Signaling, but this distinction is so esoteric for Fortran 
                    that if you understand the difference and make use of it in 
                    your Fortran programs, then you can send your resume to us... 
                  &lt;/P&gt;
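                  &lt;P&gt;If you want to see these reserved patterns for yourself, 
                    here is a sketch using the TRANSFER intrinsic (this assumes 
                    IEEE single precision, and z'7FC00000' is just one typical 
                    quiet NaN pattern): &lt;/P&gt;
                  &lt;PRE&gt;
      integer :: ipat(2)
      real :: pinf, anan
      data ipat / z'7F800000', z'7FC00000' /
      pinf = transfer(ipat(1), 0.0)   ! exponent all 1's, fraction zero
      anan = transfer(ipat(2), 0.0)   ! exponent all 1's, fraction non-zero
      print *, pinf &gt; huge(0.0)       ! T -- beyond the largest finite real
      print *, anan == anan           ! F -- the NaN signature again
      end&lt;/PRE&gt;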
&lt;P&gt;Continued in next post&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 02:14:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844709#M62678</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T02:14:27Z</dc:date>
    </item>
    <item>
      <title>Re: The Perils of Real Numbers (Part 2, contd.)</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844710#M62679</link>
      <description>&lt;P&gt;(Continued from previous post)&lt;/P&gt; 
                 &lt;P&gt; Finally, a denormalized number is one where the exponent 
                    field is completely zero and the fraction is non-zero. These 
                    are the smallest of the finite numbers (both positive and 
                    negative). The very smallest positive (non-zero) number is 
                    just 00000001 (in hex). It has only one significant bit (not 
                    even one digit!) since the denormalized range does NOT use 
                    a hidden bit. Generally speaking the denormalized numbers 
                    have fewer significant digits than ANY normalized number, 
                    and the smaller the denormalized number, the fewer its significant 
                    digits. These are the ONLY real numbers with fewer than 7 
                    significant digits. If you happen to produce and use values 
                    in this range, your results may not be as accurate as you 
                    might normally expect. &lt;/P&gt;
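                  &lt;P&gt;You can poke at this range with the Fortran 90 inquiry 
                    intrinsic TINY, which returns the smallest normalized value. 
                    A sketch (this assumes gradual underflow is in effect; some 
                    compilers flush denormalized results to zero by default for 
                    speed): &lt;/P&gt;
                  &lt;PRE&gt;
      real :: t
      t = tiny(1.0)             ! about 1.1754944E-38, smallest normalized real
      print *, t / 2.0          ! denormalized -- halving does NOT yet give zero
      print *, t / 2.0 == 0.0   ! F, with gradual underflow in effect
      end&lt;/PRE&gt;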
                  &lt;P&gt; Quiz for next time (I hear the crowds cheering for MORE!). 
                    Change the last program above so that the DO loop runs from 
                    -2 to 15 (instead of from -2 to 2). &lt;/P&gt;
                  &lt;P&gt; It then generates:&lt;/P&gt;
                  &lt;PRE&gt;
integer:    in hex: Real number:     Real in binary:

16777214    FFFFFE   16777214.0  01001011011111111111111111111110 
16777215    FFFFFF   16777215.0  01001011011111111111111111111111
16777216   1000000   16777216.0  01001011100000000000000000000000 
16777217   1000001   16777216.0  01001011100000000000000000000000
16777218   1000002   16777218.0  01001011100000000000000000000001
16777219   1000003   16777220.0  01001011100000000000000000000010
16777220   1000004   16777220.0  01001011100000000000000000000010
16777221   1000005   16777220.0  01001011100000000000000000000010
16777222   1000006   16777222.0  01001011100000000000000000000011
16777223   1000007   16777224.0  01001011100000000000000000000100 
16777224   1000008   16777224.0  01001011100000000000000000000100 
16777225   1000009   16777224.0  01001011100000000000000000000100
16777226   100000A   16777226.0  01001011100000000000000000000101 
16777227   100000B   16777228.0  01001011100000000000000000000110 
16777228   100000C   16777228.0  01001011100000000000000000000110 
16777229   100000D   16777228.0  01001011100000000000000000000110 
16777230   100000E   16777230.0  01001011100000000000000000000111
16777231   100000F   16777232.0  01001011100000000000000000001000
&lt;/PRE&gt;
                  &lt;P&gt; Glance down the Real number column. Notice that the last 
                    few digits are: &lt;/P&gt;
                  &lt;P&gt; 14, 15, 16, 16, 18, 20, 20, 20, 22, 24, 24, 24, 26, 28, 
                    28, 28, 30, 32. Explain why there are THREE values of 24, 
                    then ONE 26, then THREE values of 28, then ONE 30; this pattern 
                    will continue (for a long time). Why not 14, 15, 16, 16, 18, 
                    18, 20, 20, 22, 22, 24, 24, 26, 26, 28, 28, 30, 30 instead?! 
                    Extra credit: what "simple" change can our expert 
                    Fortran developer make to produce the latter sequence instead 
                    of the former? Hint: any of you economists out there ever 
                    heard of Banker's Rounding?! &lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 02:15:16 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844710#M62679</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T02:15:16Z</dc:date>
    </item>
    <item>
      <title>Re: Visual Fortran Newsletter Articles</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844711#M62680</link>
      <description>

&lt;P&gt;June 2001&lt;/P&gt;

&lt;H3&gt;Win32 Corner - CreateProcess&lt;/H3&gt;

&lt;H4&gt;Steve Lionel&lt;BR /&gt;
	Compaq Fortran Engineering&lt;/H4&gt;

&lt;P&gt;In our last issue, I showed how to use ShellExecute to open a document or run a program. This prompted a user to ask a question we see often - how to emulate what Developer Studio does when you run a console application by clicking on the Start Without Debugging (CTRL-F5) button. After the program exits, the console window remains and a prompt "Press any key to continue" appears. Only after the user presses a key does the console window close. Users often ask what "switch" they can use to get this behavior.&lt;/P&gt;

&lt;P&gt;The answer is that this feature is provided by Developer Studio itself in the way it runs the application, so there is no magic option to turn this on for your own programs. But you can add this function to your own applications using the Win32 API routine CreateProcess and a bit of extra code.&lt;/P&gt;

&lt;P&gt;CreateProcess is the fundamental routine for starting a program in a new process. It has many options for how that program is started, including making the console window hidden - see the documentation for details.&lt;/P&gt;

&lt;P&gt;Example anykey.f90 (attached) works by calling its function press_anykey at the beginning of execution. The function does the following:&lt;/P&gt;

&lt;UL&gt;
	&lt;LI&gt;Determines if this is the "parent" or the "child". In this example, the determination is made by a simple test of the number of command line arguments, but you'd probably want a more sophisticated test in a real application. If a child, the function returns .FALSE. and the caller continues execution, doing the real work of whatever it is supposed to do.&lt;/LI&gt;
	&lt;LI&gt;If the parent, it gets the full path for the current executable. GetModuleFileName is itself an interesting and useful routine.&lt;/LI&gt;
	&lt;LI&gt;CreateProcess is called to run the same program in a new process, but a -child switch is added to the command line. Note that the ApplicationName argument is specified as NULL - if so, the location is taken from the first command line token. The Boolean value InheritHandles is set to TRUE (note - not .TRUE. which is a different value!) so that the child process uses the same console window as the parent. If we had wanted to specify things such as a window position, or a hidden window, we would have done so in StartupInfo. ProcessInfo returns information about the created process and thread.&lt;/LI&gt;
	&lt;LI&gt;If the CreateProcess succeeds, we wait for the process to complete by using WaitForSingleObject. You can specify a timeout value here if you like.&lt;/LI&gt;
	&lt;LI&gt;After the created process is done with its work, the handles to the thread and process are closed. This is an important step; if omitted, resources will be left dangling.&lt;/LI&gt;
	&lt;LI&gt;A message is displayed and we wait for the user to press a key.&lt;/LI&gt;
	&lt;LI&gt;Last, the function returns .TRUE., which tells the caller to just exit.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 08 Dec 2005 02:20:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844711#M62680</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T02:20:00Z</dc:date>
    </item>
    <item>
      <title>Doctor Fortran - Taking a new look at FORMAT</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844712#M62681</link>
      <description>&lt;P&gt;June 2001&lt;/P&gt;
&lt;H3&gt;Doctor Fortran - Something Old, 
                    Something New: Taking a new look at FORMAT&lt;/H3&gt;
                  &lt;H4&gt;Steve Lionel&lt;BR /&gt;


                    Compaq Fortran Engineering&lt;/H4&gt;
                  &lt;P&gt; Most Fortran programmers of a "certain age" don't 
                    give a lot of thought to the FORMAT statement - it's been 
                    in the language "forever", and many of us use the 
                    capabilities that were provided by FORTRAN 77, or perhaps 
                    even FORTRAN IV. But as the Fortran standard has evolved, 
                    formats have too, and the Good Doctor decided it's time to 
                    review what's new in FORMAT since FORTRAN 77. &lt;/P&gt;
                  &lt;P&gt; Zero width for integer and real editing - F95 added a nifty 
                    new feature, the ability to specify a field width of zero 
                    for integer output editing (I, B, O, Z descriptors) and real 
                    output editing (F descriptor.) If zero is specified, "the 
                    processor selects the field width", which typically means 
                    that the width is just enough to display the actual significant 
                    digits. So, for example: &lt;/P&gt;
                  &lt;PRE&gt;WRITE (*,"(A,I0,A)") "ABC",123,"DEF"&lt;/PRE&gt;
                  &lt;P&gt; would write "ABC123DEF". Note that the standard 
                    doesn't exactly say this is what should happen, a processor 
                    (compiler) might "select" one of several preset 
                    widths, but minimal width is the intent of this feature and 
                    should be the widely implemented interpretation. Note that 
                    a zero width is not allowed on input. &lt;/P&gt;
                  &lt;P&gt; G format for any datatype - In Fortran 77, the G edit descriptor 
                    was usable only with REAL, DOUBLE PRECISION and COMPLEX values, 
                    but F90 extended G to INTEGER, LOGICAL and CHARACTER types 
                    as well. For these other types, G operates a lot like list-directed 
                    formatting, in that the corresponding specific edit descriptor 
                    (I, L, A) is used with the width specified (except that for 
                    INTEGER data, the width may not be specified as zero.) &lt;/P&gt;
                  &lt;P&gt; EN and ES - Perhaps of more limited interest than the above 
                    additions, F90 added EN for "engineering notation" 
                    and ES for "scientific notation". These variants 
                    of the E format modify how the fraction and exponent are formatted. 
                    With EN, the significand is always greater than or equal to 
                    1 and less than 1000 (except if zero), and the exponent is 
                    always a multiple of 3, for example, 12345.0 in EN10.3 format 
                    would be "12.345E+03". With ES, the significand 
                    is greater than or equal to 1 and less than 10, and there 
                    is no restriction on the exponent. Our value 12345.0 in ES10.4 
                    format would be "1.2345E+04". For comparison, 
                    the more familiar E11.5 format would display "0.12345E+05". 
                    (Note that the standard would also allow ".12345E+05" 
                    here, but including the leading zero is the most common practice.) 
                  &lt;/P&gt;
                  &lt;P&gt; Fortran 90 also added the widely implemented B, O and Z 
                    edit descriptors. You old-timers out there may not be familiar 
                    with some F77 additions, such as TL and TR. All of these are 
                    described in the Intel
                    Fortran Language Reference manual. Happy formatting! &lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 02:24:07 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844712#M62681</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T02:24:07Z</dc:date>
    </item>
    <item>
      <title>The Perils of Real Numbers (Part 3)</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844713#M62682</link>
      <description>&lt;P&gt;September 2001&lt;/P&gt;
&lt;H3&gt;The Perils of Real Numbers (Part 3)&lt;/H3&gt;
                  &lt;H4&gt;Dave Eklund&lt;BR /&gt;


                    Visual Fortran Engineer&lt;/H4&gt;
                  &lt;P&gt; In part 
                    2 we offered the following program:&lt;/P&gt;
                  &lt;PRE&gt;
        integer :: two_24 = 2**24
        do k = -2, 15
        i = two_24 + k
        print 1, k, i, i, float(i), float(i)
1       format(i2,1x,i9,1x,z9,1x,f12.1,1x,b33.32)
        enddo
        end
&lt;/PRE&gt;
                  &lt;P&gt; with output: &lt;/P&gt;
                  &lt;PRE&gt;
integer:    in hex: Real number:     Real in binary:

16777214    FFFFFE   16777214.0  01001011011111111111111111111110 
16777215    FFFFFF   16777215.0  01001011011111111111111111111111
16777216   1000000   16777216.0  01001011100000000000000000000000 
16777217   1000001   16777216.0  01001011100000000000000000000000
16777218   1000002   16777218.0  01001011100000000000000000000001
16777219   1000003   16777220.0  01001011100000000000000000000010
16777220   1000004   16777220.0  01001011100000000000000000000010
16777221   1000005   16777220.0  01001011100000000000000000000010
16777222   1000006   16777222.0  01001011100000000000000000000011
16777223   1000007   16777224.0  01001011100000000000000000000100 
16777224   1000008   16777224.0  01001011100000000000000000000100 
16777225   1000009   16777224.0  01001011100000000000000000000100
16777226   100000A   16777226.0  01001011100000000000000000000101 
16777227   100000B   16777228.0  01001011100000000000000000000110 
16777228   100000C   16777228.0  01001011100000000000000000000110 
16777229   100000D   16777228.0  01001011100000000000000000000110 
16777230   100000E   16777230.0  01001011100000000000000000000111
16777231   100000F   16777232.0  01001011100000000000000000001000
&lt;/PRE&gt;
                  &lt;P&gt;Glance down the Real number column. Notice that the last 
                    few digits are: &lt;/P&gt;
                  &lt;P&gt; 14, 15, 16, 16, 18, 20, 20, 20, 22, 24, 24, 24, 26, 28, 
                    28, 28, 30, 32. Explain why there are THREE values of 24, 
                    then ONE 26, then THREE values of 28, then ONE 30; this pattern 
                    will continue (for a long time). Why not 14, 15, 16, 16, 18, 
                    18, 20, 20, 22, 22, 24, 24, 26, 26, 28, 28, 30, 30 instead?! 
                    Extra credit: what "simple" change can our expert 
                    Fortran developer make to produce the latter sequence instead 
                    of the former? Hint: any of you economists out there ever 
                    heard of Banker's rounding?! &lt;/P&gt;
                  &lt;P&gt; The short answer is that most modern systems default their 
                    "rounding mode" to be "round to even", 
                    otherwise known as Banker's rounding. This rounding may occur 
                    when an operation (like add, divide, etc.) is performed, or 
                    it may occur when the user attempts to display a floating 
                    point value using a particular format, or it may occur when 
                    a text string is converted to an internal floating point number. 
                    In the example given above, certain exact integer values, 
                    like 16777227, are converted to floating point values. Now 
                    16777227 cannot be exactly represented as a real number. The 
                    two closest floating point values are 16777226. and 16777228. 
                    Since 16777227 is EXACTLY half way between these values, we 
                    need to "round to even" (if not half way, the closer 
                    value should be selected). But hey, you say, both 16777226. 
                    and 16777228. are "even"! &lt;/P&gt;
                  &lt;P&gt; Well, no, that's not what is meant by "even". 
                    If we look at the bit patterns for those two floating point 
                    values, we see 01001011100000000000000000000101 and 01001011100000000000000000000110. 
                    The one whose last bit is "1" is odd, and the one 
                    whose last bit is "0" is even! So 16777226. is an 
                    "odd" value and 16777228. is an "even" 
                    value! Exactly the same thing happens when converting 16777229 
                    to real, and it also becomes 16777228., since 16777230. is another 
                    "odd" number. &lt;/P&gt;
                  &lt;P&gt; This is why the pattern of results has xxx20. three times 
                    (it's an even number), then xxx22. once (it's an odd number), 
                    then xxx24. three times (it's even), etc. All of the "even" 
                    numbers are preferred by the "round to even" rule. 
                    Now this is merely the default rounding mode. This mode has 
                    the desirable property that for a randomly selected group 
                    of numbers, half of them should round up and half should round 
                    down. &lt;/P&gt;
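The table and its 3,1,3,1 pattern can be reproduced without Fortran. This Python sketch assumes (as is true on IEEE hardware) that packing a double into a 32-bit float with the struct module uses the default round-to-nearest-even conversion:

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to IEEE single precision,
    using the default round-to-nearest-even mode."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

def f32_bits(x):
    """The 32-bit pattern of x as a single-precision float."""
    return format(struct.unpack("<I", struct.pack("<f", x))[0], "032b")

for k in range(-2, 16):
    i = 2**24 + k
    print(i, int(to_f32(float(i))), f32_bits(float(i)))

# 16777227 is exactly halfway between 16777226. and 16777228.;
# the tie goes to 16777228., whose last significand bit is 0 ("even").
```

Running it reproduces the column of values 14, 15, 16, 16, 18, 20, 20, 20, 22, 24, 24, 24, ... from the article.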
                  &lt;P&gt; Again, many modern machines have alternatives, like "chopped", 
                    or "round to plus infinity", "round to minus 
                    infinity", etc. Depending upon the hardware and software 
                    combination, there are different ways to select a non-default 
                    rounding mode. Using the above example program, we can modify 
                    it to be: &lt;/P&gt;
                  &lt;PRE&gt;
        use dflib
        integer :: two_24 = 2**24
        integer(2) control,clearcontrol,newcontrol

        call getcontrolfpqq(control)
        clearcontrol=(control .and. (.not. fpcw$mcw_rc))
        newcontrol=clearcontrol .or. fpcw$chop  ! select chopped 
                                                ! rounding mode
        call setcontrolfpqq(newcontrol)

        do k = -2, 15
        i = two_24 + k
        print 1, k, i, i, float(i), float(i), float(i)
1       format(' 2**24 + ',i2,1x,i9,1x,z9,1x,f12.1,1x,b33.32,1x,z)
        enddo
        end
&lt;/PRE&gt;
                  &lt;P&gt;This latter program will achieve the goal of producing pairs 
                    of values instead of the 3,1,3,1 pattern given by default. 
                    The modification above causes the rounding mode to be "chopped". 
                    
                    Other systems, like Tru64 UNIX, might achieve the same thing 
                    by compiling the original program with the switch -fprm chopped, 
                    with similar result. The point of the example is to show that 
                    the user DOES have control of certain fairly subtle computations 
                    and conversions. Yes, it IS rare that one might need this. 
                    It is important to know that the capability is there; it helps 
                    to understand (and change) "unexpected" results! 
                  &lt;/P&gt;
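For readers without access to the FPU control word, the effect of "chopped" conversion can be emulated in software. This Python helper is a hypothetical illustration (it is not how the hardware does it, and it ignores denormals and infinities): it truncates the significand to 24 bits instead of rounding:

```python
import math

def f32_chop(x):
    """Emulate 'chopped' (round-toward-zero) conversion to single
    precision: keep 24 significand bits and discard the rest.
    Denormals and infinities are not handled in this sketch."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(abs(x))      # x = m * 2**e with 0.5 <= m < 1
    sig = int(m * 2**24)           # truncate to a 24-bit significand
    return math.copysign(math.ldexp(sig, e - 24), x)

# With truncation the converted values come in pairs
# (..., 18, 18, 20, 20, ...) instead of the 3,1,3,1
# pattern produced by round-to-even.
```

Applying f32_chop to 2**24 + k for k = -2..15 yields the "pairs" sequence the extra-credit question asks for.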
                  &lt;P&gt; Many of you have undoubtedly heard that one should never 
                    compare two real numbers as being exactly equal. This old 
                    admonition gets thrown around in various flavors, but the 
                    fear is that somehow there is going to be some of that roundoff 
                    "stuff" that causes answers to be slightly different 
                    than what one might expect, so comparisons should always involve 
                    some kind of fudge factor. In languages such as APL, a "comparison 
                    tolerance" was provided to "help out"! Not 
                    very enlightened reasoning, but there is actually some underlying 
                    truth here. To explore this, let's consider the following 
                    "simple" program: &lt;/P&gt;
                  &lt;P&gt; 
                  &lt;PRE&gt;
        N=5
        do i=-N,N
        x=float(i)/N
        print *,'x=',x

! How is the following false!?
        if(sin(x) .eq. sin(x)) print *,' Equal for x=',x

        enddo
        end
&lt;/PRE&gt;
                  &lt;/P&gt;&lt;P&gt; The program computes sin(x) for various values of x, compares 
                    the result to sin(x), and if the two values are equal, prints 
                    that they are equal. If you compile this using /opt:4 (the 
                    default), all the values will get reported as Equal; but at 
                    /opt:0, the ONLY value that reports as Equal is where x is 
                    zero! &lt;/P&gt;
                  &lt;P&gt; It turns out that sin(x) is computed via a call to a routine 
                    (__FIsin) for either optimization level. And the same 80 bit 
                    result is returned. There is no other "computation" 
                    going on. So how, you might ask, are these two values comparing 
                    as "not equal"? And why the difference for the two 
                    optimization levels? &lt;/P&gt;
                  &lt;P&gt; Well, the answer is that in one case __FIsin is called twice, 
                    and the two values are compared. The first value needs to 
                    be moved out of the function return register to make room 
                    for the second call. The store to memory rounds the 80 bit 
                    value to a 32 bit value. The comparison compares the evicted 
                    32 bit value (zero extended) to the new 80 bit value, and 
                    only when the two values are exactly the same (zero) do they 
                    compare as equal. In order for two such values to compare 
                    equal, nearly 50 low order bits must be zero! Is there a (large) 
                    value of N that produces a non-zero value for x where this 
                    happens? [Clearly an extra credit question!] &lt;/P&gt;
                  &lt;P&gt; In the optimized case, only one call is made to __FIsin. 
                    The value happens to be moved out to the floating point stack 
                    instead (a full 80 bit copy), and the comparison is between 
                    two full 80 bit values, which are identical! &lt;/P&gt;
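The narrowing store that defeats the equality test can be mimicked in Python, where floats are doubles. Rounding sin(x) through a 32-bit store (a stand-in for CVF's 80-bit-to-32-bit store, not the actual mechanism) changes the value for every x in the loop except zero:

```python
import math
import struct

def store_f32(x):
    """Simulate storing a wide result into a 32-bit memory cell."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

for i in range(-5, 6):
    x = i / 5.0
    wide = math.sin(x)           # full-width result "in the register"
    narrow = store_f32(wide)     # result after the store to memory
    if wide == narrow:
        print("Equal for x =", x)
```

Only x = 0.0 survives the comparison, mirroring the /opt:0 behavior described above.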
                  &lt;P&gt; I can hear some of you saying that you would use the compiler 
                    switch /fltconsistency (or /Op) to "cure" this problem. 
                    Yes, this happens to cause the code to store both intermediate 
                    results (the value of sin(x)) to memory, so that now the comparison 
                    is actually a 32 bit comparison. While this is effective for 
                    this case, the switch can cause many unnecessary stores to 
                    memory for no great reward, and it does NOT guarantee that 
                    all intermediate results are stored WITHIN a statement. For 
                    example, if you have a=b*c/d, a store will NOT happen after 
                    the multiply of b*c (the code is fld; fmul; fdiv). It is a 
                    very useful switch which may help in many cases, but has a 
                    fairly high cost, and is NOT going to cover up for all of 
                    our sloppy coding! &lt;/P&gt;
                  &lt;P&gt; WHENEVER the compiler generates code to store a floating 
                    point value to memory (the above was a very simple case!), 
                    rounding to the appropriate size will occur. This WILL be 
                    different between different compilers, different versions 
                    of the same compiler, and at different optimization levels. 
                    So, please do not check for equality of floating point numbers. 
                    What works today (by accident, as above) may cease to "work" 
                    for the most flimsy of excuses! Even when we are not evaluating 
                    an expression, just moving the value around can cause the 
                    value to change! &lt;/P&gt;
                  &lt;P&gt; The following program is one of my favorite examples of 
                    how an optimizing compiler, in cahoots with a machine with 
                    a wide floating point register, can be just "too darn 
                    clever". It is an example of the kind of problem one 
                    can have when a floating result is NOT stored to memory between 
                    computations. &lt;/P&gt;
                  &lt;P&gt; 
                  &lt;PRE&gt;
        x=1.0

        do i=1,20000
        x=x*.5
        if(x.eq.0)goto 10  ! When X is zero, we are done
d       print *,x
        enddo

10      type *,' Iterated ',i,' times'
        end
&lt;/PRE&gt;
                  &lt;/P&gt;&lt;P&gt; Notice that this is a test.f (NOT test.f90) source, so that 
                    we can use the switch /d_lines to include the "print 
                    *,x" line (or exclude it). I would invite you to compile 
                    and execute the program using all combinations of: &lt;/P&gt;
                  &lt;PRE&gt;
        /opt:0 or /opt:2
        /d_lines (or no switch)
        /real_size:32 or /real_size:64
        /fpe:0 or /fpe:3
&lt;/PRE&gt;
                  &lt;P&gt; First, let me say that this is the kind of loop that one 
                    might execute in order to decide how small a real number can 
                    get without becoming zero. As long as the iteration continues, 
                    X is known to be non-zero. This is one of the ways to help 
                    compute how big the exponent field is for a "new" 
                    machine. This also assumes that sooner or later cutting a 
                    real number in half will make the number become zero! Notice 
                    that each value of X is a power of 2., so there are no rounding 
                    problems. &lt;/P&gt;
                  &lt;P&gt; What is interesting is that at higher optimization levels 
                    (and when NOT printing the value of X), the compiler is delighted 
                    to just leave X in a floating point register, and test for 
                    zero there. Unfortunately, the floating point register is 
                    not the same size as a real (or double precision), so one 
                    can get the very misleading result of 16435 iterations! If 
                    we use /opt:0 (or /d_lines), the compiler is more inclined 
                    to move the result back to memory, giving 150 (or 127 with 
                    /fpe:0). Notice that /fpe:0 causes the first computed denormalized 
                    number to be set to zero, hence giving 23 fewer iterations 
                    for real, and 52 fewer iterations with /real_size:64. &lt;/P&gt;
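The same probe can be run in Python, whose floats are IEEE doubles. Assuming gradual underflow is in effect (the /fpe:3 analogue), halving reaches zero after 1075 steps, 52 more than the 1023 you would see if denormals were flushed to zero:

```python
x = 1.0
n = 0
while x != 0.0:       # as long as the loop runs, x is still non-zero
    x = x * 0.5       # each x is an exact power of two: no rounding
    n += 1
print("Iterated", n, "times")
# 2**-1074 is the smallest non-zero double, so the loop runs 1075 times
```

Because the value never leaves the double-precision representation, the misleading "wide register" count the article describes cannot occur here.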
                  &lt;P&gt; This is why certain numerical packages go to GREAT lengths 
                    when they try to figure out the properties of real numbers 
                    in such a mechanical way! It may be crucial to turn off all 
                    optimizations and still worry about the default settings of 
                    any "peculiar" switches like /fpe: and /real_size:. 
                    Here is another example of when testing for equality may not 
                    do what one might expect!&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 02:29:44 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844713#M62682</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T02:29:44Z</dc:date>
    </item>
    <item>
      <title>Calling Visual Fortran from Java JNI</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844714#M62683</link>
      <description>&lt;P&gt;September 2001&lt;/P&gt;
&lt;H3&gt;Calling Visual Fortran from Java JNI&lt;/H3&gt;
                  &lt;H4&gt;Lorri Menard&lt;BR /&gt;

                    Visual Fortran Engineer&lt;/H4&gt;
                  &lt;P&gt; Recently a customer contacted us because he was having problems 
                    trying to hook a Java JDK GUI to a Fortran DLL. &lt;/P&gt;
                  His starting point was the article "&lt;A href="http://www.math.ucla.edu/~anderson/JAVAclass/JavaInterface/JavaInterface.html" target="_blank"&gt;Putting 
                  a Java Interface on your C,C++ or Fortran Code&lt;/A&gt;" by 
                  C. Anderson. While this article does a good job of describing 
                  the perils of mixed-language processing and describing how to 
                  call C/C++ from Java, its description of calling Fortran is 
                  specific to the UNIX operating system, and the information is 
                  not correct for users of Visual Fortran. 
                  &lt;P&gt;&lt;/P&gt;
                  &lt;P&gt; The purpose of this article is to explain how to call a 
                    routine written using Visual Fortran from a Java application. 
                    In this case, I'm using JDK and its verbs. If you are using 
                    a different Java, you will have to use the analogous verbs 
                    and implementation-specific .h files. &lt;/P&gt;
                  &lt;P&gt; The easiest and most maintainable mechanism to call Visual 
                    Fortran from Java is to use C++ wrappers to do the actual 
                    interface with Java, and then call Fortran from C++. Your 
                    DLL can contain both C++ and Fortran code, therefore you won't 
                    have the complexity of multiple DLLs. There are differences 
                    in the calling standards between Fortran and C++; I'll point 
                    them out in the example programs below. &lt;/P&gt;
                  &lt;P&gt; The Java Native Interface was designed with C++ in mind. 
                    It's supported as .h files, not as Fortran files. Some of 
                    the most important .h files are the jni.h and jni_md.h files 
                    provided by Java. Also important is the program-specific .h 
                    file created by running javah over your Java program. &lt;/P&gt;
                  &lt;P&gt; The C++ wrapper needs to include jni.h and the program-specific 
                    .h file. The file jni.h describes Java structures, and the 
                    program-specific .h file contains prototypes for the external 
                    routines. It is in this file that you can find the name and 
                    signature that Java is expecting for the external routines. 
                  &lt;/P&gt;
                  &lt;P&gt; Let me put a bit of an example here. This is much simplified 
                    from Dr. Anderson's article because this one doesn't actually 
                    do anything. There is a Java class that calls a routine called 
                    "initializeTemperature". Ultimately this will be 
                    implemented in Fortran, however there will be a C++ wrapper 
                    in between Java and Fortran. In the next few pages you will 
                    see an example of the Java code, the generated program-specific 
                    .h file, the C++ wrapper code, and the skeleton of the Fortran 
                    code. &lt;/P&gt;
                  &lt;P&gt;Java code:
                  &lt;PRE&gt;
//
// native method declarations
//
                  
public native void initializeTemperature(double[] Tarray, int 
    m, double d); 
&lt;/PRE&gt;
                  &lt;/P&gt;&lt;P&gt;Program-specific .h file generated by javah:&lt;/P&gt;
                  &lt;PRE&gt;
/* DO NOT EDIT THIS FILE - it is machine generated */
#include &lt;jni.h&gt;
/* Header for class TempCalcJava */
#ifndef _Included_TempCalcJava
#define _Included_TempCalcJava
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class: TempCalcJava
 * Method: initializeTemperature
 * Signature: ([DID)V
 */
JNIEXPORT void JNICALL Java_TempCalcJava_initializeTemperature
(JNIEnv *, jobject, jdoubleArray, jint, jdouble);
#ifdef __cplusplus
}
#endif
#endif
&lt;/PRE&gt;
                  &lt;P&gt; In this example, the original name in Java is prepended 
                    with the string "Java_" and the name of the class. 
                    There are also two arguments prepended to the argument list. 
                    These are a "JNI Environment pointer", and a copy 
                    of the Java object being acted upon. You will need to write 
                    your C++ wrapper to export this same name, and to expect these 
                    two added arguments. &lt;/P&gt;
                  &lt;P&gt; Please note that in this example an array is being passed 
                    out of Java. In Java, arrays are stored much differently than 
                    they are in Fortran; there is a sort of "meta" structure 
                    around them. However, there are Java native callback routines 
                    available to the external routines to get to the actual data. 
                    These are available as methods using the "JNI Environment 
                    pointer" passed in as the first "hidden" argument. 
                    These methods are declared in the jni.h header file. Again, 
                    easily accessible through C++, not easily accessible through 
                    Fortran. &lt;/P&gt;
                  &lt;P&gt; The C++ routine Java_TempCalcJava_initializeTemperature 
                    contains the following code: &lt;/P&gt;
                  &lt;PRE&gt;
extern "C" void initializeTemperature(
    double* Tarray, long m, double d);

JNIEXPORT void JNICALL Java_TempCalcJava_initializeTemperature
(JNIEnv *env, jobject, jdoubleArray Tarray, jint m, jdouble d)
{
     jdouble* tPtr   = env-&amp;gt;GetDoubleArrayElements(Tarray,0);
     initializeTemperature(tPtr, m, d);
     env-&amp;gt;ReleaseDoubleArrayElements(Tarray, tPtr,0);
}
&lt;/PRE&gt;
                  &lt;P&gt; In this code snippet, the GetDoubleArrayElements method 
                    returns a pointer to the first element in an array of doubles. 
                    This can be passed straight through to the Fortran routine 
                    initializeTemperature because that is how Fortran is expecting 
                    an array to be passed. Note that the Fortran routine is declared 
                    to be extern "C" to avoid C++ name mangling. &lt;/P&gt;
                  &lt;P&gt; Finally, the Fortran code. Fortran and C++ use different 
                    default calling standards in argument passing and stack clean 
                    up. In this example I chose to put the "smarts" 
                    about overriding the defaults in the Fortran code. It certainly 
                    could also be done in the C++ code; your choice. There is 
                    a comprehensive chapter on Programming with Mixed Languages 
                    in the online Programmer's Guide if you want more information 
                    on what these differences are, and when you need to worry 
                    about them. Bottom line, here I've told Fortran to use the 
                    C calling standard defaults for the routine initializeTemperature, 
                    and to export the external name "_initializeTemperature", 
                    since the C++ code used this mixed-case name. &lt;/P&gt;
                  &lt;PRE&gt;
       subroutine initializeTemperature(T, m, d)

!dec$ attributes c :: initializeTemperature
!DEC$ attributes alias:"_initializeTemperature"::initializeTemperature
       integer m
       real*8 t(m)
       real*8 d
       ! ... do some calculations ...
       return
       end
&lt;/PRE&gt;
                  &lt;P&gt; Finally, I know many of you are thinking "Why do I 
                    have to have C++ in the middle?" And the answer is that 
                    you don't HAVE to, it's just much simpler. If you wanted to 
                    do this in strict Fortran you would have to manually translate 
                    jni.h to Fortran, and it if changed (such as with a newer 
                    version of Java) you'd have to do the translation again. &lt;/P&gt;
                  &lt;P&gt; If you are not ever going to be passing arrays, well, maybe 
                    you can get away without a modified jni.h, and without having 
                    C++ in the middle. (If you remember, we used C++ to access 
                    the native Java routines to get at fields in the array structure.) 
                    You can easily write a Fortran program with a long and ugly 
                    name, and declare two extra, ignored arguments. However, will 
                    it be useful to have a routine that only accepts scalars? 
                    You will have to determine that by your application. I think 
                    that most applications pass around LOTS of data, not a few 
                    discrete numbers. &lt;/P&gt;
                  &lt;P&gt; I still stick by my contention at the beginning of this 
                    article that it is easier and more maintainable to keep the 
                    C++ wrapper and call Fortran from that routine. If Java changes, 
                    it provides a new jni.h and a simple rebuild of your project 
                    incorporates any changes. &lt;/P&gt;
                  &lt;P&gt; Of course, if Java provided a Fortran header file, or maybe 
                    a MODULE file, this whole process would be much easier. &lt;insert smiley face here&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 02:34:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844714#M62683</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T02:34:30Z</dc:date>
    </item>
    <item>
      <title>Doctor Fortran - Don't Blow Your Stack!</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844715#M62684</link>
      <description>&lt;P&gt;September 2001&lt;/P&gt;
&lt;H3&gt;Doctor Fortran - Don't Blow Your Stack!&lt;/H3&gt;
                  &lt;H4&gt;Steve Lionel&lt;BR /&gt;


                    Visual Fortran Engineering&lt;/H4&gt;
                  &lt;P&gt;"Doctor, my stack is overflowing! What does it mean?! 
                    Is there a cure?!" The Doctor frequently sees questions 
                    like these, and he realizes it's time for a general article 
                    on the subject of stack allocation, as well as the other memory 
                    allocation types, static and dynamic. &lt;/P&gt;
                  &lt;H4&gt;Static allocation&lt;/H4&gt;
                  &lt;P&gt;"Everybody's got to be somewhere," the saying goes. 
                    And so it is with your program's data - it has to live somewhere 
                    in memory while it is being referenced (registers are a special 
                    kind of memory we won't get into here.) The compiler, linker 
                    and operating system work together to determine exactly where 
                    in memory a piece of data is to reside. The simplest method 
                    of assigning locations is "static allocation", where 
                    the data is assigned a fixed (static) address by the compiler 
                    and linker in the executable image (EXE). For example, if 
                    variable X is statically allocated at address 4000, it is 
                    always at address 4000 when that EXE is run, no matter what 
                    else is going on in the system. (DLLs can also have static 
                    data - it is allocated at a fixed offset from the base address 
                    where the DLL gets loaded.)&lt;/P&gt;
                  &lt;P&gt;Static allocation is simple from the compiler's perspective 
                    because all that is needed is to create a list of variables 
                    that need allocation, and lay them down in memory one after 
                    the other. A run-time advantage of static allocation is that 
                    it is usually easy and fast to access a fixed address and 
                    statically allocated data can be used from anywhere in the 
                    program. But static allocation has disadvantages too. First, 
                    if you have any reentrant or parallel code, the multiple codestreams 
                    are both trying to use the same data, which may not be wanted. 
                    Second, if you have many routines which need a lot of memory 
                    just while they're executing, the available address space 
                    can fill up quickly (for example, ten routines each of which 
                    declares a 1000x1000 REAL(8) array need a total of 80,000,000 
                    bytes just for those arrays.) And perhaps most important, 
                    with static allocation you must know at compile-time how much 
                    memory you will want.&lt;/P&gt;
                  &lt;P&gt;Up through Fortran 77, the Fortran standard was carefully 
                    written in a way so that static allocation was the only method 
                    needed. Even today, static allocation is the most widely used 
                    method - in Visual Fortran, COMMON blocks and most variables 
                    with the SAVE attribute are allocated statically. (Note that 
                    Compaq Visual Fortran, by default, implies SAVE for local routine 
                    variables unless it can see that the variable is always written 
                    before it is read.) [Intel Visual Fortran does not imply SAVE for local variables - you must specify that with a SAVE statement or use the /Qsave option if you want that.]&lt;BR /&gt;&lt;/P&gt;
                  &lt;H4&gt;Dynamic allocation&lt;/H4&gt;
                  &lt;P&gt;Dynamic allocation is the complete opposite of static allocation. 
                    With dynamic allocation, the running application must call 
                    a system routine to request a particular amount of memory 
                    (for example, 1000 bytes). The system routine looks to see 
                    if that request size is available in the collection ("heap") 
                    of memory segments it has available. If the request can be 
                    satisfied, a range of memory addresses is marked as used and 
                    the starting address is returned to the program. If no free 
                    block in the heap is large enough, the operating system expands 
                    the virtual address space of the process to replenish the 
                    heap, failing only if there is no more virtual memory 
                    available. The program 
                    stores the base address in a pointer variable and then can 
                    access the memory. When the program no longer needs the memory, 
                    another system routine is called to "free" it - 
                    return it to the heap so that it can be used again by a future 
                    allocate call. You can think of this as similar to borrowing 
                    money from a bank, and then later paying it back (except that 
                    there's no interest!)&lt;/P&gt;
                  &lt;P&gt;The big advantage of dynamic allocation is that the program 
                    can decide at run-time how much memory to get, making it possible 
                    to create programs that can accommodate problems of any size. 
                    You are limited only by the total amount of virtual memory 
                    available to your process (a little less than 2GB in 32-bit 
                    Windows) and, as long as you keep your pointers separate, 
                    your allocation is separate from others in the application. 
                    However, if your program "forgets" to free the allocated 
                    memory, and no longer has the pointer through which it is 
                    referenced, the allocated memory becomes unusable until the 
                    program exits - a "memory leak". Also, the allocate/free 
                    process can be slow, and accessing data through pointers can 
                    itself reduce run-time performance somewhat.&lt;/P&gt;
                  &lt;P&gt;In Fortran, the ALLOCATE statement performs dynamic allocation, 
                    with DEALLOCATE being the "free" operation. In Visual 
                    Fortran, one can use dynamic allocation in other ways, such 
                    as the C-style malloc/free routines, or by calling Win32 API 
                    routines to allocate memory.&lt;/P&gt;
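                  &lt;P&gt;A minimal sketch of the ALLOCATE/DEALLOCATE pairing (the 
                    array name and the run-time size here are hypothetical): &lt;/P&gt;
                  &lt;PRE&gt;
      real, allocatable :: work(:)
      integer n, istat
      read *, n                       ! size decided at run-time
      allocate (work(n), stat=istat)  ! request memory from the heap
      if (istat .ne. 0) stop 'allocation failed'
      work = 0.0
      deallocate (work)               ! "free" - return it to the heap
      end&lt;/PRE&gt;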
                  &lt;H4&gt;Stack Allocation&lt;/H4&gt;
                  &lt;P&gt;Stack allocation appears to be the least understood of the 
                    three models. The "stack" is a contiguous section 
                    of memory assigned by the linker. The "stack pointer" 
                    is a register (ESP in the X86 architecture) which holds the 
                    current position in the stack. When a program starts executing, 
                    the stack pointer points to the top of the stack (just above 
                    the highest-addressed location in the stack). As routines are 
                    called, the stack pointer is decremented (subtracted from) 
                    to point to a section of the stack that the routine can use 
                    for temporary storage. (The previous value of the stack pointer 
                    is saved.) The routine can call other routines, which in turn 
                    create stack space for themselves by decrementing the stack 
                    pointer. When a routine returns to its caller, it cleans up 
                    by simply restoring the saved stack pointer value.&lt;/P&gt;
                  &lt;P&gt;The stack is an extremely efficient way of creating "scratch 
                    space" for a routine, and the stack plays a prominent 
                    role in the mechanism of calling and passing arguments to 
                    routines. Visual Fortran uses the stack to create space for 
                    automatic arrays (local arrays whose size is based on a routine 
                    argument) and for temporary copies of arrays used in array 
                    expressions or when a contiguous copy of an array section 
                    must be passed to another routine. The problem is, however, 
                    that the total amount of stack space is fixed by the linker, 
                    and if a routine tries to allocate more space than the stack 
                    can hold, the dreaded "stack overflow" error occurs.&lt;/P&gt;
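                  &lt;P&gt;For instance (a hypothetical routine), the automatic 
                    array below gets its space from the stack on every call, so 
                    a large enough n triggers exactly this overflow: &lt;/P&gt;
                  &lt;PRE&gt;
      subroutine solve (n)
      integer n
      real(8) temp(n, n)   ! automatic array - allocated on the stack
      temp = 0.0d0         ! at n = 1000 this needs 8,000,000 bytes,
      end                  ! far more than a 1MB default stack&lt;/PRE&gt;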
                  &lt;P&gt;On some other operating systems, OpenVMS for example, the 
                    OS can extend the stack as needed, limited only by the total 
                    amount of virtual address space available. On Windows, however, 
                    the stack allocation is determined by the linker and defaults 
                    to a paltry 1MB in the Microsoft linker. You can change the 
                    allocation - for details, see the on-disk documentation topic 
                    "Stack, linker option setting size of" - but this works only 
                    for executable images (EXEs). If you are building a DLL, it 
                    doesn't matter what you set the stack size to - the size 
                    specified by the EXE that calls your DLL (for example, 
                    VB.EXE) is what is used.&lt;/P&gt;
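                  &lt;P&gt;As an illustration (the 16MB reserve value is arbitrary), 
                    the Microsoft linker's /STACK option sets the stack reserve 
                    size in bytes, and EDITBIN /STACK can change it on an 
                    already-built EXE: &lt;/P&gt;
                  &lt;PRE&gt;link /STACK:16777216 myprog.obj&lt;/PRE&gt;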
                  &lt;P&gt;So, what can you do if changing the stack size is not an 
                    option? Reduce your code's use of the stack. Replace automatic 
                    arrays with allocatable arrays and ALLOCATE them to the desired 
                    size at the start of the routine (they will be automatically 
                    deallocated on routine exit unless marked SAVE.) If passing 
                    a noncontiguous array section to another routine, have the 
                    called routine accept it as a deferred-shape array (an explicit 
                    interface is required). Future versions of Visual Fortran may allocate large 
                    temporary values dynamically rather than using the stack, 
                    but for now, being aware of the limits of 
stack allocation 
                    is important.&lt;/P&gt;&lt;P&gt;[Edit January 11, 2008] An update to Intel Visual Fortran 9.1 added the /heap-arrays option which tells the compiler to use the heap (dynamic allocation) for arrays that it would otherwise put on the stack. This can be handy if you can't avoid the stack arrays otherwise. It does add a slight performance penalty for the allocate and deallocate, but applications processing large arrays probably would not notice. See the documentation for more details.]&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2005 02:39:08 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844715#M62684</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2005-12-08T02:39:08Z</dc:date>
    </item>
    <item>
      <title>Re: Doctor Fortran - Don't Blow Your Stack!</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844716#M62685</link>
      <description>Doctor Fortran's shingle is out again - see the new &lt;A href="http://intel.com/software/drfortran"&gt;Doctor Fortran blog&lt;/A&gt;.&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Oct 2006 00:20:47 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Visual-Fortran-Newsletter-Articles/m-p/844716#M62685</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2006-10-06T00:20:47Z</dc:date>
    </item>
  </channel>
</rss>

