Intel® Fortran Compiler

Integration with VB.net and loss of Double Precision

brown__martin
Beginner
1,771 Views

The problem in short: My VB.net application will only maintain double precision calculations if it first calls an old DLL file that was compiled with the old DEC/Compaq Visual Fortran compiler (the predecessor of Intel Fortran).

Details: I maintain a large engineering application (model building, FEM analysis, etc.) that was originally developed in Visual Studio VB.net 2003. It relies on several DLLs for critical calculations that were originally developed with the DEC/Compaq Visual Fortran compiler back in about 2005. We have since migrated the main application to Visual Studio 2013 and 2017, and have recompiled the Fortran DLLs with the Intel Fortran compiler that is integrated into Visual Studio.

All variables that are involved in the critical calculations, in both the VB code and the Fortran, are declared as Double Precision. (The double precision is absolutely required.) If the main VB application calls one of the old DLL files that was compiled years ago with Compaq Visual Fortran, then everything works fine: all calculations are done to double precision accuracy, both in the VB part and in the Fortran DLLs. If instead the application calls a DLL that was compiled with the current Intel Fortran compiler, the entire application loses its ability to do proper double precision math, including calculations that are done in the main VB application.

I have tested all of the newly compiled DLLs using small VB test applications to simulate how the DLLs are called from the real application. The DLLs pass all of our tests. When called from the small VB test programs, they return the proper double precision results so it appears that there is nothing inherently wrong with the DLLs.

I believe Microsoft Visual Studio and/or the .NET Framework might be causing the problem. It appears as though Visual Studio is making its own decisions about when to maintain double precision results and when to throw them away.

Again, the bazar thing is that all it takes is a call with dummy arguments to one of the old DLLs and then everything is maintained in double precision, even back in the VB code. It is as if there was a setting used in the old Fortran compiler that forced everything to stay correct. Maybe stopping the VS/VB parts from changing data types, or stopping some optimization process.

Any ideas would be greatly appreciated !

23 Replies
IanH
Honored Contributor II
1,560 Views

Perhaps show some code - Fortran and VB

Note there is a difference between double precision (typically an eight byte floating point format when using Intel Fortran) and extended precision (ten bytes).
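A quick way to confirm what the Fortran side is actually using (just a sketch, nothing specific to your DLLs) is to query the DOUBLE PRECISION kind directly:

program check_double
  implicit none
  double precision :: d
  print *, 'storage size in bytes:', storage_size(d) / 8   ! typically 8 with Intel Fortran
  print *, 'decimal precision    :', precision(d)          ! typically 15
end program check_double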

FortranFan
Honored Contributor II
1,560 Views

@brown, martin,

Chances are you will need to provide further details of your code and/or all the compiler settings, especially with Intel Fortran and Compaq Visual Fortran, to receive any actionable feedback.

Your original post makes me wonder: do you by any chance have the "Any CPU" setting on the .NET side? Given that the Compaq Visual Fortran based DLLs will be x86 (Win32, aka 32-bit) targets, could it be that you have x64 (64-bit) DLLs with Intel Fortran, and that some of the data types being used are different in the two target environments in a way that leads to the precision problems you're encountering? Note I'm not aware of a specific possibility along such lines; I'm just bringing it up in case it gives you any clue.

Steve_Lionel
Honored Contributor III
1,560 Views

Please show us the declaration of the Fortran routine in your DLL along with any !DEC$ directives. Also show the declaration of the function in VB.

brown__martin
Beginner
1,560 Views

Thank you for the quick responses!  I have attached a few files that might help.  One is a Word doc that has the declarations and calls contained in the VB.net code, along with the corresponding Fortran routine.

Also attached, just in case it helps, are the vbproj and sln files from the original VB.net 2003 application.  (I think the problem originated here.)

I have done some more testing and need to give you some additional information.  I opened the original VB.net project in VS 2003.  When I remove the calls to the old Fortran DLLs, this old VB.net program is then also not able to do double precision math.  So the problem has been in there for years; we just never saw it because one of the old Fortran DLLs is always called at startup.  It is as if there is a compiler setting somewhere in the main VB.net project that shuts down the double precision, and another setting in the old Fortran DLL project that turns it back on.  (I have no idea if that is actually possible, just an idea.)

Again, I can create brand new VB.net test applications, with the default settings, and they will run the new DLLs and also do proper double precision.  Maybe that tells me that the solution is to start over: create a brand new VB.net application with the default settings, bring in all references, dependencies, etc.  Start it as a small test program, then cut and paste the real code in (over 200,000 lines) and see if it works that way?

 

mecej4
Honored Contributor III
1,560 Views

Apart from any of the issues mentioned in the original post, I see inconsistencies in the Fortran code displayed in the .DOCX attachment of #5.

subroutine MATRIXSOLVER(KNEq, KNCoeff, IA, JA, Diag, SG, B, X)
!DEC$ ATTRIBUTES DLLEXPORT, STDCALL ::MATRIXSOLVERSMALL
!DEC$ ATTRIBUTES ALIAS:'MATRIXSOLVERSMALL' :: MATRIXSOLVERSMALL
!DEC$ ATTRIBUTES REFERENCE::KNEq
!DEC$ ATTRIBUTES REFERENCE::KNCoeff
!DEC$ ATTRIBUTES REFERENCE::IA
!DEC$ ATTRIBUTES REFERENCE::JA
!DEC$ ATTRIBUTES REFERENCE::Diag
!DEC$ ATTRIBUTES REFERENCE::SG
!DEC$ ATTRIBUTES REFERENCE::B
!DEC$ ATTRIBUTES REFERENCE::X
IMPLICIT NONE
INTEGER*4 N, KNEQ, KNCOEFF, IA(1), JA(1)
REAL*8 Diag(1), SG(1), B(1), X(1)
CALL SPARS(KNEQ, KNCOEFF, SG, IA, JA)
CALL VSS (KNEQ, KNCOEFF, DIAG, B, SG, IA, JA, X)
end subroutine MATRIXSOLVERS

The compiler would have refused to produce a DLL with this code, since the subroutine name in the first line does not match that in the last line, and neither matches the exported name (see the line with the DLLEXPORT directive). All these names must be the same, and in addition match the name in the VB declaration.
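For illustration only, a consistent version (a sketch using the exported name from the directives above) would look like:

subroutine MATRIXSOLVERSMALL(KNEq, KNCoeff, IA, JA, Diag, SG, B, X)
!DEC$ ATTRIBUTES DLLEXPORT, STDCALL :: MATRIXSOLVERSMALL
!DEC$ ATTRIBUTES ALIAS:'MATRIXSOLVERSMALL' :: MATRIXSOLVERSMALL
!DEC$ ATTRIBUTES REFERENCE :: KNEq, KNCoeff, IA, JA, Diag, SG, B, X
! ... same declarations and body as above ...
end subroutine MATRIXSOLVERSMALL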

brown__martin
Beginner
1,560 Views

mecej4,

Sorry, that was just sloppiness in my last post.  I tried to simplify things.  The real name of that particular DLL was MatrixSolverSmall.  In the post I edited it to MatrixSolver just to avoid people asking me where the "Large" one was, but obviously I missed the "s".  In the real code I have all the names correct.  That is not the problem, but I am very impressed by how carefully you looked at that!

 

FortranFan
Honored Contributor II
1,560 Views

@brown, martin:

Are there any type conversions carried out on the VB side of the code in .NET, such as using the CDbl or Double.Parse methods?

Note some of the statements in your original post describe behavior I've never come across: "I have tested all of the newly compiled DLLs using small VB test applications to simulate how the DLLs are called from the real application. The DLLs pass all of our tests. When called from the small VB test programs, they return the proper double precision results so it appears that there is nothing inherently wrong with the DLLs."; "I believe Microsoft Visual Studio and/or the .NET Framework might be causing the problem. It appears as though Visual Studio is making its own decisions about when to maintain double precision results and when to throw them away."; "Again, the bazar thing is that all it takes is a call with dummy arguments to one of the old DLLs and then everything is maintained in double precision, even back in the VB code. It is as if there was a setting used in the old Fortran compiler that forced everything to stay correct."

Your best bet may be to make your entire solution available to Intel Support, assuming confidentiality will be important and Intel Support can provide that for you.

Steve_Lionel
Honored Contributor III
1,560 Views

I think the problem description here is inaccurate. It's not that the ability to do double precision is "lost", but that some particular computation isn't being done in double. If I were still in Intel support and were handed this, I would reply back that the customer needs to identify the particular operation that isn't being done properly and how they know that. 

My advice for now is to spend some time in the debugger and try to find the place where you think something is going wrong. Intel support isn't going to be terribly interested in a vague description such as this, especially when you admit that the problem isn't related to the Fortran DLL at all.

mecej4
Honored Contributor III
1,560 Views

Again, the bazar thing is that all it takes is a call with dummy arguments to one of the old DLLs and then everything is maintained in double precision, even back in the VB code. It is as if there was a setting used in the old Fortran compiler that forced everything to stay correct. Maybe stopping the VS/VB parts from changing data types, or stopping some optimization process.

I don't know if you meant "bazaar" or "bizarre", but I wish to comment about the startling notion that an old CVF compiler-built DLL "can force everything to stay correct" with other software that is up to 15 years younger. If that were possible, we would now have an impregnable defense against all viruses and not-yet-introduced bugs.

The CVF-built DLL probably used only X87 instructions, and X87 instructions can carry slightly more precision (80 bits, with a 64-bit mantissa) than the more modern SSE2 instructions (52+1-bit mantissa). This aspect may be worth investigating, but I suspect that your problems lie elsewhere.

When troubleshooting DLLs, it is often helpful to write a Fortran driver that calls the DLL routines with the same arguments as your VB code does. Run that driver in combination with the old and new DLLs and compare results. Move back to the VB+DLL combination only when the test results are satisfactory.
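A minimal sketch of such a driver (array sizes and data are placeholders; it assumes the MATRIXSOLVERSMALL interface shown earlier and linking against the DLL's import library):

program dll_driver
  implicit none
  interface
    subroutine MATRIXSOLVERSMALL(KNEq, KNCoeff, IA, JA, Diag, SG, B, X)
      !DEC$ ATTRIBUTES DLLIMPORT, STDCALL, REFERENCE :: MATRIXSOLVERSMALL
      !DEC$ ATTRIBUTES ALIAS:'MATRIXSOLVERSMALL' :: MATRIXSOLVERSMALL
      integer(4) :: KNEq, KNCoeff, IA(*), JA(*)
      real(8)    :: Diag(*), SG(*), B(*), X(*)
    end subroutine MATRIXSOLVERSMALL
  end interface
  integer(4) :: neq, ncoeff, ia(4), ja(5)
  real(8)    :: diag(3), sg(5), b(3), x(3)
  neq = 3
  ncoeff = 5
  ! ... fill ia, ja, diag, sg and b with the same data the VB application passes ...
  call MATRIXSOLVERSMALL(neq, ncoeff, ia, ja, diag, sg, b, x)
  print '(3ES24.16)', x      ! inspect the full double precision digits
end program dll_driver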

brown__martin
Beginner
1,560 Views

Thanks for your comments.  They have helped me.  You inspired me to track down where the double precision "breaks".  It happens when the VB application initializes DirectX graphics.  The attached test routine demonstrates it.  It does a simple calculation, two divided by three, with double precision variables.  The result is 0.666666666666667, with 15 significant figures.  Looks good to me.

Then the initialization of DirectX occurs.  After this the same calculation yields 0.666666686534882, which is accurate to only about 7 significant figures.

Then the old Compaq DLL is called and the ability to get double precision results is restored.

So it looks like I need to learn some more about DirectX and see if I can get it initialized without losing the double precision.  If there are any DirectX experts out there, I am all ears.  It is still a mystery to me how calling the old DLL has such an effect.
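(For reference, the two values above are exactly what 2/3 looks like in single versus double precision; a quick Fortran check, just a sketch:)

program two_thirds
  implicit none
  real(4) :: s
  real(8) :: d
  s = 2.0_4 / 3.0_4                              ! single precision quotient
  d = 2.0_8 / 3.0_8                              ! double precision quotient
  print '(a,f18.15)', ' single: ', real(s, 8)    ! prints 0.666666686534882
  print '(a,f18.15)', ' double: ', d             ! prints 0.666666666666667
end program two_thirds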

 

andrew_4619
Honored Contributor II
1,560 Views

This is a VB problem, not a Fortran problem!  That said, if you run the VB in a debugger and check the values of A, B, and C, what do you get?  The problem may be in the output of the value rather than in the actual value; the output could be subject to all manner of hidden settings that may have changed.

IanH
Honored Contributor II
1,560 Views

What happens if you explicitly declare A and B to be double in the VB code?

brown__martin
Beginner
1,560 Views

Correct, it is not really a Fortran problem.  I did not know that a few days ago.  From researching DirectX I have found that "When DirectX is initialized in 'managed' code it implicitly switches ALL, that's right all, floating point operations to single precision; even those that are declared as 'double' and even in unmanaged dlls called from within the managed code."

The fix seems to be something involving "CreateFlags.FpuPreserve", which will keep the floating-point precision intact.  I have not yet figured out the syntax for using it in VB.net.  Anybody know?

And, yes I will go find a VB.net forum !

mecej4
Honored Contributor III
1,560 Views

Intel Fortran does provide GETCONTROLFPQQ and SETCONTROLFPQQ in module IFPORT for monitoring and setting the X87 flags. The flag of interest is probably the precision control flag. Be aware that there are conventions regarding preserving and restoring the X87 control word.
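A minimal sketch of the usual pattern (check the IFPORT documentation for the exact constant names; FPCW$MCW_PC is the precision-control mask, and FPCW$64 / FPCW$53 select 64-bit or 53-bit precision):

subroutine restore_fp_precision()
  use ifport
  implicit none
  integer(2) :: fpcw
  call GETCONTROLFPQQ(fpcw)               ! read the current x87 control word
  fpcw = IAND(fpcw, NOT(FPCW$MCW_PC))     ! clear the precision-control bits
  fpcw = IOR(fpcw, FPCW$64)               ! select 64-bit precision (FPCW$53 for plain double)
  call SETCONTROLFPQQ(fpcw)               ! write it back
end subroutine restore_fp_precision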

Unless you are building the DLL in such a way that it runs with X87 instructions, these comments will not help.

Steve_Lionel
Honored Contributor III
1,560 Views

ianh wrote:

What happens if you explicitly declare A and B to be double in the VB code?

They are declared as double.

 

IanH
Honored Contributor II
1,560 Views

Steve Lionel (Ret.) wrote:

ianh wrote:

What happens if you explicitly declare A and B to be double in the VB code?

They are declared as double.

It may not be pertinent to the original problem, but with the VB syntax `Dim A, B, C As Double`, the "As Double" only applies to C.  A and B are declared, but not explicitly typed.

 

FortranFan
Honored Contributor II
1,560 Views

ianh wrote:

.. It may not be pertinent to the original problem, but with the VB syntax `Dim A, B, C As Double`, the "As Double" only applies to C.  A and B are declared, but not explicitly typed.

The OP refers to Visual Basic as part of .NET, and I don't think the above is correct in this context:

https://docs.microsoft.com/en-us/dotnet/visual-basic/language-reference/statements/dim-statement

brown__martin
Beginner
1,560 Views

In VB.net it is fine to declare multiple variables on one line with "Dim A, B, C As Double"; all three get the Double type.  If it were an argument list sent to a routine, that would be a different story.

I now know what the problem is.  DirectX intentionally sets the floating-point unit (FPU) to single precision by default when it starts.  I see the solution as one of the following:

1. Figure out how to start DirectX but make it preserve the double precision.  (I have not yet been successful at this)

2. Figure out what settings are needed with the Intel Fortran compiler to reset the FPU state back to double precision.  (This might be the easiest if I knew how to do it.  I will experiment with this today.  Maybe mecej4's comment above will give me a clue; see also the sketch after this list.)

3. Make a DLL with C to do the initialization of the DirectX.  There are examples of this on the internet that include flags to preserve the double precision.  The problem with this one is that I have never used C.
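Along the lines of option 2, a hypothetical helper (just a sketch building on mecej4's IFPORT suggestion; the routine name is made up and the constants should be checked against the documentation) could be exported from one of the Fortran DLLs and called from VB.net right after DirectX is initialized:

subroutine RESETFPU()
!DEC$ ATTRIBUTES DLLEXPORT, STDCALL :: RESETFPU
!DEC$ ATTRIBUTES ALIAS:'RESETFPU' :: RESETFPU
  use ifport
  implicit none
  integer(2) :: fpcw
  call GETCONTROLFPQQ(fpcw)                            ! control word as DirectX left it (24-bit precision)
  fpcw = IOR(IAND(fpcw, NOT(FPCW$MCW_PC)), FPCW$64)    ! put precision control back to 64-bit
  call SETCONTROLFPQQ(fpcw)
end subroutine RESETFPU

It would be declared and called from VB.net in the same way as the other DLL routines.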

Eugene_E_Intel
Employee
1,560 Views

FYI, this seems to be the C call that controls the floating point precision:

https://msdn.microsoft.com/en-us/library/e9b52ceh(v=vs.140).aspx

--Eugene

brown__martin
Beginner
1,439 Views

mecej4,

That is brilliant.  It looks like exactly what I need.  I studied the documentation on the GETCONTROLFPQQ and SETCONTROLFPQQ functions and tried the examples.  I could get the examples to work, but I could not figure out how to actually get them to change the precision.  The logical expressions using integers are beyond my comprehension.  Can you please give me some more help?

Am I allowed to make connections with people and hire consultants through this forum?
