Intel® Fortran Compiler

Debug DLL from Release application, and vice versa

eos_pengwern
Beginner

I have a C++ application which calls a Fortran DLL. It works flawlessly (though slowly) when I run in Debug mode with all runtime error-checks switched on, but crashes every time I try to run it in Release mode. Although the stack trace information given by Visual Studio when running in Release mode is not fully reliable, it indicates that the crash took place while running the DLL code.

To try to isolate the problem, I have been trying to run my application by calling the Debug version of the DLL from the Release version of the C++ main application, or vice versa. However, all attempts to do either of these have ended in the error "R6034 - An application has made an attempt to load the C runtime library incorrectly".

My background reading so far has indicated that the usual cause of such an error is the lack of a manifest file, but I have ensured that I have the "Generate manifest file" option set to "yes" in the linker menu. I have also tried making sure that both the C++ application and the Fortran DLL link against the same runtime libraries (usually "Multithreaded debug DLL").

I'm running IVF V11.1.051 in Visual Studio 2008 on 32-bit Vista. Any ideas about where I should look next would be greatly appreciated.

Thanks,

Stephen.

Steven_L_Intel1
Employee
Are you running this on the same system where you built it?

My suggestion is to not bother with the "debug DLL" library - this is useful mainly for C/C++ code. It has no effect on your ability to debug your own DLL.

How does it "crash"? The Release mode enables optimizations, and if your code is incorrect, this can cause problems. (A compiler error can't be ruled out either, but is less likely.)
eos_pengwern
Beginner
Yes, at this point I'm doing everything within Visual Studio on the development computer.

When the Release code fails, it greys out the application's main window and posts a standard " has stopped working" message. On returning to Visual Studio, there is a message box stating "Stack overflow" and a single entry in the call stack giving the name of the Fortran DLL, but no further information. The failure occurs in a part of the code where there are four C++ threads running, each of which makes frequent calls (~20 times per second) to the Fortran DLL, so it's hard to narrow the problem down. That's why I'd like at least to begin by separating out the C++ and Fortran parts of the application if I can.

As this is a video-processing application making use of large arrays, the most common cause of stack overflows I've seen during development is when a temporary array defined on the stack is given an erroneous dimension, for example because the variable used to determine the dimension is not properly defined. I'd expect this sort of error to be caught when running in Debug mode, however, so I can't see why there should be such a difference in behaviour between the two modes.

Stephen.
Steven_L_Intel1
Employee
Ah, ok. It is perhaps more likely that the stack overflow is an indirect result of optimization. Try compiling your DLL with /heap-arrays and see if that helps. You could also increase the stack reserve size of the C++ application in its linker properties.
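For reference, here is roughly what those two settings look like as command-line switches (the file names are placeholders; in Visual Studio the same options live under the Fortran and Linker property pages):

```shell
# Fortran DLL: allocate automatic and temporary arrays on the heap.
# The 0 means "all sizes"; a nonzero value is a KB threshold below
# which such arrays still go on the stack.
ifort /dll /heap-arrays0 mydll.f90

# C++ executable: raise the stack reserve, e.g. to 16 MB
# (the default reserve is typically 1 MB).
link main.obj /STACK:16777216 /OUT:main.exe
```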
eos_pengwern
Beginner
Yes indeed, with /heap-arrays0 it no longer crashes.

I guess it's time for me to have a good look around to see where I'm putting biggish arrays on the stack; it may be best if I keep the stack for one-dimensional arrays and put anything bigger than this on the heap anyway.

Thanks,
Stephen.
Steven_L_Intel1
Employee
It could also be temporaries created with array expressions. You should be able to identify where in the code it is getting the error.
eos_pengwern
Beginner
Meanwhile, I've also noticed a crash due to an "Access violation" when running the Release version of another part of the code, which again runs flawlessly in Debug mode. I still have the /heap-arrays0 option set, so this probably isn't related to the temporary arrays. Is there a compiler option I can use to troubleshoot this, or do I need to resort to writing debug information to a file as the application runs?

Stephen.
Steven_L_Intel1
Employee
If you build the Debug version with optimization on, does it still fail? Try turning off the bounds checking to come closer to what the Release build does.
eos_pengwern
Beginner
Built in debug mode with optimization on and bounds checking off, it still runs flawlessly.

Stephen.
Steven_L_Intel1
Employee
Start changing the options under a Debug configuration to more closely match the Release, leaving debug symbol information on. It is possible that just having debug on changes the code so that the error is masked. Can you tell where in the DLL the access violation is happening? Try building with "minimal" debug information.
eos_pengwern
Beginner

It looks like there have been two separate issues here...

Some very strange things happened when I started changing the Debug configuration, and eventually I decided to delete my Visual Studio project and create a brand new one. With the brand new project, the "access violation" problem went away.

I've tracked the "stack overflow" problem to a routine where the program does some bit-manipulations to turn an array received as C unsigned chars into a false-colour 32-bit ARGB image. For example, in a loop where 'nx' and 'ny' represent the dimensions of the image:

[bash]        character(c_char), intent(in), dimension(:,:) :: uchar_image
        integer(4), intent(out), dimension(:,:) :: ARGB_image   

        do j=1,ny
            do i=1,nx
                ! First we transfer the C unsigned char bitwise
                ! into the 8 least significant bits of a conventional 
                ! Fortran 32-bit integer
                ARGB_image(i,j) = transfer(uchar_image(i,j), nx) 
                                  ! Recall that all the second argument does 
                                  ! is to provide the container type

		.  (other lines commented out)

		.
            end do
        end do[/bash]


... will cause a stack overflow in Release mode despite working fine in Debug mode.

I can't actually see what gets put on the stack here, nor why using /heap-arrays0 should make any difference, as the arrays uchar_image and ARGB_image are both allocated in the C++ application and only handled elementally in the Fortran routine.

Stephen.

Steven_L_Intel1
Employee
How do you know that the stack overflow occurred right at that point?
eos_pengwern
Beginner
If I commented out the line containing the TRANSFER function, there was no stack overflow.
Steven_L_Intel1
Employee
Do you get the error the first time through the loop or only after some number of iterations?
eos_pengwern
Beginner

Actually it doesn't fail on the first loop, but only after several thousand pixels have been processed.

Stephen.

eos_pengwern
Beginner
I should have added:

...but it's definitely at the "transfer" statement that it fails.
Steven_L_Intel1
Employee
Ok, I can see what is happening here. The compiler is allocating a stack temp inside the loop, but failing to "pop" it at the end of the statement. I will take this up with the developers.

Let me suggest the following as an alternative to the TRANSFER - I assume you realize that with TRANSFER, the upper 24 bits of the result are undefined and it is the necessity of constructing this value that triggers the creation of the stack temp.

ARGB_image(i,j) = ZEXT(ICHAR(uchar_image(i,j)))

This will move the 8 character bits into ARGB_IMAGE and fill the rest with zero. The generated code for this is also much better. ZEXT is an extension - you could also use:

ARGB_image(i,j) = IBITS(ICHAR(uchar_image(i,j)),0,8)

though this might be a bit slower.

Lastly, you want CHARACTER(KIND=C_CHAR) in the declaration of uchar_image. What you have works by coincidence since C_CHAR is 1 in our implementation, but that's not what you mean.
Steven_L_Intel1
Employee
Well, now I'm not so sure about the stack temp. I constructed a runnable example based on the fragment you showed but the stack temp does get cleaned up each time through the loop.

Can you post a complete subroutine that shows the problem?
eos_pengwern
Beginner

Hi Steve,

By following your advice I seem to have solved the problem, but for the record here is the full routine in its two versions - the original which gave the stack overflow at the first "transfer" when in Release mode, and the new version I wrote today which, besides being much more elegant and almost certainly faster, runs in Release mode without error. The routine takes a monochrome image of a laser beam profile and 'colours it in' according to the separately-provided wavelength. For testing purposes the "RGB_from_Wavelength" function could be replaced by anything that returns real numbers in the range (0, 1) for the variables 'red', 'green' and 'blue'.

Here's the old, bad version:

[bash]    subroutine ConvertFormats_Char(uchar_image, wavelength, ARGB_image)    
          
        use iso_c_binding
        
        character(c_char), intent(in), dimension(:,:) :: uchar_image
        real(kind(1d0)), intent(in) :: wavelength
        integer(4), intent(out), dimension(:,:) :: ARGB_image     
        
        integer :: i, j, nx, ny
        real(kind(1d0)) :: red, green, blue   
        
        ! Begin by finding the RGB components corresponding to the current wavelength:
        call RGB_from_Wavelength(wavelength, red, green, blue)
        
        nx = size(ARGB_image, 1) 
        ny = size(ARGB_image, 2)
        !DEC$ LOOP COUNT (800)
        do j=1,ny
            !DEC$ LOOP COUNT (800)
             do i=1,nx
                ! First we transfer the C unsigned char bitwise into the 8 least 
                ! significant bits of a conventional Fortran 32-bit integer
                ARGB_image(i,j) = transfer(uchar_image(i,j), nx) ! Recall that all the 
                                                                 ! second argument does 
                                                                 ! is to provide the 
                                                                 ! container type
                ARGB_image(i,j) = iand(ARGB_image(i,j), int(Z'000000FF',4))  
                                  ! Necessary because the standard doesn't specify
                                  ! what 'transfer' puts into the leftmost bits.

                ! Now we calculate the red component, and shift it eight bits to the left:
                ARGB_image(i,j) = int(real(ARGB_image(i,j), kind(1d0)) * red, 4)
                ARGB_image(i,j) = ishft(ARGB_image(i,j), 8)                
                
                ! We then add the green component, and shift a further eight bits to the
                ! left; this looks necessarily ugly because we're doing the 'transfer
                ! & iand' inline, to avoid creating any new variables:
                ARGB_image(i,j) = ior(ARGB_image(i,j),                                 &
                                      int(real(iand(transfer(uchar_image(i,j), nx),    &
                                                    int(Z'000000FF',4)), kind(1d0))    &
                                                                        * green, 4))
                ARGB_image(i,j) = ishft(ARGB_image(i,j), 8)
        
                ! Now we add the blue component
                ARGB_image(i,j) = ior(ARGB_image(i,j),                                 &
                                      int(real(iand(transfer(uchar_image(i,j), nx),    &
                                                    int(Z'000000FF',4)), kind(1d0))    &
                                                                        * blue, 4))  
      
                ! Now we need to set the most significant byte - the 'A' - to 0xFF:
                ARGB_image(i,j) = ior(ARGB_image(i,j), int(Z'FF000000',4))
            end do
        end do
        
    end subroutine ConvertFormats_Char[/bash]


... and here's the new, good version:

[bash]    subroutine ConvertFormats_Char(uchar_image, wavelength, ARGB_image)    
          
        use iso_c_binding
        
        character(kind=c_char), intent(in), dimension(:,:) :: uchar_image
        real(kind(1d0)), intent(in) :: wavelength
        integer(4), intent(out), dimension(:,:) :: ARGB_image     
        
        integer :: i, j, nx, ny
        real(kind(1d0)) :: charval, red, green, blue   
       
        ! Begin by finding the RGB components corresponding to the current wavelength:
        call RGB_from_Wavelength(wavelength, red, green, blue)
        
        nx = size(ARGB_image, 1) 
        ny = size(ARGB_image, 2)
        !DEC$ LOOP COUNT (800)
        do j=1,ny
            !DEC$ LOOP COUNT (800)
            do i=1,nx
            
                ! First we transfer the C unsigned char bitwise into the 8 least significant
                ! bits of a conventional Fortran 32-bit integer, then convert it to a double:
                charval = real(zext(ichar(uchar_image(i,j))), kind(1d0))

                ! Now we calculate the red component, and shift it eight bits to the left:
                ARGB_image(i,j) = int(charval * red, 4)
                ARGB_image(i,j) = ishft(ARGB_image(i,j), 8)                
                        
                ! We then add the green component, and shift a further eight bits to the
                ! left;                
                ARGB_image(i,j) = ior(ARGB_image(i,j), int(charval * green, 4))
                ARGB_image(i,j) = ishft(ARGB_image(i,j), 8)
                
                ! Now we add the blue component
                ARGB_image(i,j) = ior(ARGB_image(i,j), int(charval * blue, 4))  
   
                ! Now we need to set the most significant byte - the 'A' - to 0xFF:
                ARGB_image(i,j) = ior(ARGB_image(i,j), int(Z'FF000000',4))

            end do
        end do
        
    end subroutine ConvertFormats_Char[/bash]


Thank you for your help with this.
Stephen.

Steven_L_Intel1
Employee
Thanks for the code, and I'm glad to hear that you have it working now.
anishtain4
Beginner
I'm trying to use /heap-arrays but it says "insufficient virtual memory". I used /heap-arrays:10240 but I still get the same error. What's the deal?