<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Change in floating point rounding between Versions 11 and 12 of the Intel® Fortran Compiler</title>
    <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810072#M42233</link>
    <description>Thank you all for the help and advice. It looks like I have to live with the fact that inconsistent adoption of compiler versions on our project will result in minor changes in results.&lt;BR /&gt;&lt;BR /&gt;By the way, using compiler version 12.1, both the /fp:source and /fp:strict options result in the compiler using the more accurate cbrt call. An interesting finding, though, was that if the compiler could pre-compute a cube root (such as in a**(1./3.), where a was defined as a parameter), it would use the less accurate powf if /fp:source was used.</description>
    <pubDate>Wed, 15 Feb 2012 16:30:37 GMT</pubDate>
    <dc:creator>Michael_D_11</dc:creator>
    <dc:date>2012-02-15T16:30:37Z</dc:date>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810063#M42224</link>
      <description>I have recently noted a rather minor discrepancy in a calculation in one of our codes. In the code we are calculating the cube root (by exponentiation to the power 1./3.) of a number, 1500. Between compiler versions 11.1.065 and 12.1.0.233 the result of this calculation has changed from 11.44714355 (0x41372780) to 11.44714260 (0x4137277F), a change in the last bit of the binary mantissa. The later value is clearly the more precise binary representation, but the difference in results using different compilers (with the same floating point settings) is leading to noticeable differences in model predictions.&lt;BR /&gt;&lt;BR /&gt;Was a change made in how exponentiation is handled between the two compilers? Was intermediate rounding changed (hence the 1/3 exponent is different)?&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Feb 2012 16:11:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810063#M42224</guid>
      <dc:creator>Michael_D_11</dc:creator>
      <dc:date>2012-02-14T16:11:51Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810064#M42225</link>
      <description>The ultimate goal of the math library is to produce the "correctly rounded infinite precision result". We are constantly looking for places we can improve results where this goal is not met. You evidently found one where we made an improvement.&lt;BR /&gt;&lt;BR /&gt;In general, there are many factors that can lead to small differences in floating point results. Some are as simple as math library improvements, but others can be more subtle, such as rearranging operations for optimization, use of vectorization, etc. If these cause "noticeable differences" in your application's results, it is perhaps using an unstable algorithm or is peculiarly sensitive to last-bit differences. It's something you have to expect when changing anything about the environment, including different compiler versions or optimization option changes.&lt;BR /&gt;&lt;BR /&gt;There is no guarantee of bit-for-bit sameness of floating point computations. I will also comment that as you are using single precision, you should not expect more than 7 decimal significant digits. You're reporting a change in the 8th decimal digit. Perhaps you will want to do sensitive calculations in double precision.&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Feb 2012 16:20:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810064#M42225</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2012-02-14T16:20:10Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810065#M42226</link>
      <description>I know I should not expect more than 7 decimal digits, but the different compilers are giving different binary answers (by 1 bit), and the values I provided in my initial post are the "exact" decimal representations of the binary answers. Yes, I know I have a problem with sensitivity in downstream code, but that isn't a problem I can readily address at this time. I was hoping some combination of settings could result in different versions of the Intel Fortran compiler (v11 and v12) yielding the same result for a given calculation. It appears from your answer that this is unlikely to be achievable.&lt;BR /&gt;&lt;BR /&gt;To me it appears that between v11 and v12 of the compiler some change was made to intermediate rounding such that the following code gives answers that differ by one binary bit.&lt;BR /&gt;&lt;BR /&gt;[bash]      real b
      real ans_b

      b= 1500.
      ans_b= b**(1./3.)[/bash] I understand the need for higher precision for sensitive calculations, but I guess I naively assumed that a rather straightforward calculation, one with no possibilities for associative or distributive reordering, would give consistent, if inexact, results.</description>
      <pubDate>Tue, 14 Feb 2012 16:57:41 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810065#M42226</guid>
      <dc:creator>Michael_D_11</dc:creator>
      <dc:date>2012-02-14T16:57:41Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810066#M42227</link>
      <description>It's the exponentiation operator that became more accurate.</description>
      <pubDate>Tue, 14 Feb 2012 18:04:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810066#M42227</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2012-02-14T18:04:27Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810067#M42228</link>
      <description>Thank you for the prompt reply, Steve. One last question: was this improvement in accuracy made in the compiler or in the run-time math libraries? From what I can tell it seems to be in the compiler, as I don't notice a difference when going from the version 11 to the version 12 library DLLs.&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Feb 2012 18:26:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810067#M42228</guid>
      <dc:creator>Michael_D_11</dc:creator>
      <dc:date>2012-02-14T18:26:12Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810068#M42229</link>
      <description>The operation is done in the run-time library. The unoptimized version in both cases calls _powf, while the optimized version calls __libm_sse2_powf.</description>
      <pubDate>Tue, 14 Feb 2012 20:09:47 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810068#M42229</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2012-02-14T20:09:47Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810069#M42230</link>
      <description>Steve,&lt;BR /&gt;&lt;BR /&gt;I am not using the optimized sse2 functions. Using Dependency Walker, it seems the Version 12.1 compiler is using the cbrtf function (in libmmd.dll) while the Version 11 compiler is using _powf. Would this explain the differences? If so, is there a compiler setting to force one implementation over the other?</description>
      <pubDate>Wed, 15 Feb 2012 14:58:02 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810069#M42230</guid>
      <dc:creator>Michael_D_11</dc:creator>
      <dc:date>2012-02-15T14:58:02Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810070#M42231</link>
      <description>Strange - I could have sworn that I saw it use something called libm_sse2_powf. Anyway, I see that by default it calls libm_sse2_cbrtf. No, you can't force it to call _powf.&lt;BR /&gt;&lt;BR /&gt;I understand the pain it causes when floating point results change, even when the new results are better. But that's the reality of doing floating point computations, and expecting bit-for-bit sameness when the environment changes is unrealistic.</description>
      <pubDate>Wed, 15 Feb 2012 15:04:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810070#M42231</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2012-02-15T15:04:25Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810071#M42232</link>
      <description>In a case I worked on where the compiler recognized an opportunity to substitute the SVML cbrt(), -fp-model source would prevent that substitution. Also, -imf-arch-consistency=true is intended to switch math library calls to a version of the library that minimizes architecture dependencies rather than emphasizing speed on specific architectures.&lt;BR /&gt;In the example you presented, I would expect certain compilers to make the most accurate possible evaluation at compile time, so I'm reluctant to assume such an example represents the behavior of a practical application.</description>
      <pubDate>Wed, 15 Feb 2012 15:41:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810071#M42232</guid>
      <dc:creator>TimP</dc:creator>
      <dc:date>2012-02-15T15:41:20Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810072#M42233</link>
      <description>Thank you all for the help and advice. It looks like I have to live with the fact that inconsistent adoption of compiler versions on our project will result in minor changes in results.&lt;BR /&gt;&lt;BR /&gt;By the way, using compiler version 12.1, both the /fp:source and /fp:strict options result in the compiler using the more accurate cbrt call. An interesting finding, though, was that if the compiler could pre-compute a cube root (such as in a**(1./3.), where a was defined as a parameter), it would use the less accurate powf if /fp:source was used.</description>
      <pubDate>Wed, 15 Feb 2012 16:30:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810072#M42233</guid>
      <dc:creator>Michael_D_11</dc:creator>
      <dc:date>2012-02-15T16:30:37Z</dc:date>
    </item>
    <item>
      <title>Change in floating point rounding between Versions 11 and 12 of Fortran compiler</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810073#M42234</link>
      <description>&lt;DIV id="tiny_quote"&gt;&lt;DIV style="margin-left: 2px; margin-right: 2px;"&gt;Quoting &lt;A href="https://community.intel.com/en-us/profile/419188/" class="basic"&gt;michael.t.donovansaic.com&lt;/A&gt;&lt;/DIV&gt;&lt;DIV style="background-color: #e5e5e5; margin-left: 2px; margin-right: 2px; border: 1px inset; padding: 5px;"&gt;&lt;I&gt;Thank you all for the help and advice. It looks like &lt;STRONG&gt;&lt;SPAN style="text-decoration: underline;"&gt;I have to live with the fact that inconsistent adoption of compiler versions&lt;/SPAN&gt;&lt;/STRONG&gt; on our project will result in minor changes in results.&lt;BR /&gt;...&lt;/I&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;P&gt;&lt;BR /&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;I wouldn't give up until I had checked the &lt;STRONG&gt;Floating Point Unit&lt;/STRONG&gt;'s ( &lt;STRONG&gt;FPU&lt;/STRONG&gt; ) &lt;SPAN style="text-decoration: underline;"&gt;Control Word&lt;/SPAN&gt; in both cases.&lt;BR /&gt;&lt;BR /&gt;Could you call the '&lt;STRONG&gt;_control87&lt;/STRONG&gt;' CRT function from &lt;STRONG&gt;IVF&lt;/STRONG&gt;? For example, in C/C++ it is called like:&lt;BR /&gt;&lt;BR /&gt; ...&lt;BR /&gt;UINT uiControlWordx87 = &lt;STRONG&gt;_control87&lt;/STRONG&gt;( _PC_53, _MCW_PC );&lt;BR /&gt; ...&lt;BR /&gt;&lt;BR /&gt;If the &lt;STRONG&gt;FPU&lt;/STRONG&gt;'s &lt;SPAN style="text-decoration: underline;"&gt;Control Words&lt;/SPAN&gt; differ in the two cases, then the &lt;STRONG&gt;FPU&lt;/STRONG&gt;s were initialized differently. I could assume in that case that a change was made to the &lt;STRONG&gt;Rounding Control&lt;/STRONG&gt;, possibly related to the &lt;STRONG&gt;_RC_NEAR&lt;/STRONG&gt;, &lt;STRONG&gt;_RC_CHOP&lt;/STRONG&gt;, &lt;STRONG&gt;_RC_DOWN&lt;/STRONG&gt; or &lt;STRONG&gt;_RC_UP&lt;/STRONG&gt; constants.&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Sergey&lt;/P&gt;</description>
      <pubDate>Thu, 16 Feb 2012 02:41:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/Change-in-floating-point-rounding-between-Versions-11-and-12-of/m-p/810073#M42234</guid>
      <dc:creator>SergeyKostrov</dc:creator>
      <dc:date>2012-02-16T02:41:54Z</dc:date>
    </item>
  </channel>
</rss>

