Intel® Fortran Compiler

Integration with VB.net and loss of Double Precision

brown__martin
Beginner

The problem in short: my VB.net application only maintains double-precision calculations if it first calls an old DLL that was compiled with the old DEC/Compaq Fortran compiler (the predecessor of Intel Fortran).

Details: I maintain a large engineering application (model building, FEM analysis, etc.) that was originally developed in Visual Studio VB.net 2003. It relies for its critical calculations on several DLLs that were originally developed with the DEC/Compaq Visual Fortran compiler around 2005. We have since migrated the main application to Visual Studio 2013 and 2017, and have recompiled the Fortran DLLs with the Intel Fortran compiler that is integrated into Visual Studio.

All variables involved in the critical calculations, in both the VB code and the Fortran, are declared as double precision. (The double precision is absolutely required.) If the main VB application calls one of the old DLLs that was compiled years ago with Compaq Visual Fortran, everything works fine: all calculations are done to double-precision accuracy, both in the VB part and in the Fortran DLLs. If instead the application calls a DLL compiled with the current Intel Fortran compiler, the entire application loses its ability to do proper double-precision math, including calculations done in the main VB application.

I have tested all of the newly compiled DLLs using small VB test applications that simulate how the DLLs are called from the real application. The DLLs pass all of our tests. When called from the small VB test programs, they return the proper double-precision results, so there appears to be nothing inherently wrong with the DLLs.

I believe Microsoft Visual Studio and/or the .NET Framework might be causing the problem. It appears as though Visual Studio is deciding on its own when to maintain double-precision results and when to throw them away.

Again, the bizarre thing is that all it takes is a call with dummy arguments to one of the old DLLs, and then everything is maintained in double precision, even back in the VB code. It is as if a setting used by the old Fortran compiler forced everything to stay correct, perhaps stopping the VS/VB parts from changing data types, or stopping some optimization process.

Any ideas would be greatly appreciated !

mecej4
Honored Contributor III

As you can see from the X87 documentation (e.g., http://www.website.masmforum.com/tutorials/fptute/fpuchap1.htm ), bits 9 and 8 of the 16-bit control word are the PC (precision control) bits. You may use bitwise AND and OR operations, or the IBSET()/IBCLR() Fortran intrinsics, to change just those bits to 11 (for a 64-bit significand) or 10 (for a 53-bit significand, i.e., double precision). You can do these operations as illustrated in the GETCONTROLFPQQ example.

In summary, what you have to do in the Fortran DLL is: (i) retrieve and save the current control word, (ii) change bits 9 and 8 as desired, leaving the other bits unchanged, (iii) run the calculations in the Fortran DLL, (iv) restore the saved control word, and (v) return from the DLL to VB.

This may be a good juncture for updating the code to work with SSE2 instead of X87.

brown__martin
Beginner

mecej4,

I got it working!  It turned out to be simple; I just couldn't see the forest for the trees before.

Before I start DirectX, I call a Fortran DLL which calls GETCONTROLFPQQ and passes the result back to the VB.net application.  Then the VB.net app starts DirectX, and DirectX changes the settings on the FPU.  So I just call the Fortran DLL again, passing the saved control word back in, and this time have it call SETCONTROLFPQQ to set the FPU back to its original default settings.  Easy!  DirectX works fine with double precision, so everything is good: the graphics are fine and the analysis converges again.

Thank you for your help and patience.  I never would have figured it out without your help.  You are a rock star.

(Now if Intel could make a routine that simply sets the FPU to its default settings in one call, I would be really happy.)

mecej4
Honored Contributor III

brown, martin wrote:
Now if Intel could make a routine that simply sets the FPU to its default settings in one call, I would be really happy.

The first call to GETCONTROLFPQQ will give you the same result every time you run your application; simply print/display the control-word value that it returns, and thereafter use that constant in a single call to SETCONTROLFPQQ to undo the unwanted change made by DirectX.

The forensic query to GETCONTROLFPQQ has done its job and need not be repeated.
