I have a C++ application which calls a Fortran DLL. It works flawlessly (though slowly) when I run in Debug mode with all runtime error-checks switched on, but crashes each time I try to run it in Release mode. Although the stack trace information given by Visual Studio when running in Release mode is not fully reliable, it indicates that the halt took place while running the DLL code.
To try to isolate the problem, I have been trying to run my application by either calling the Debug version of the DLL from the Release version of the C++ main application, or vice versa. However, all attempts to do either of these have ended in an error "R6034 - An application has made an attempt to load the C runtime library incorrectly".
My background reading so far has indicated that the usual cause of such an error is the lack of a manifest file, but I have ensured that I have the "Generate manifest file" option set to "yes" in the linker menu. I have also tried making sure that both the C++ application and the Fortran DLL link against the same runtime libraries (usually "Multithreaded debug DLL").
I'm running IVF V11.1.051 in Visual Studio 2008 on 32-bit Vista. Any ideas about where I should look next would be greatly appreciated.
Thanks,
Stephen.
My suggestion is to not bother with the "debug DLL" library - this is useful mainly for C/C++ code. It has no effect on your ability to debug your own DLL.
How does it "crash"? The Release mode enables optimizations, and if your code is incorrect, this can cause problems. (A compiler error can't be ruled out either, but is less likely.)
When the Release code fails, it greys out the application's main window and posts a standard "stack overflow" error message.
As this is a video-processing application making use of large arrays, the most common cause of stack overflows I've seen during development is when a temporary array defined on the stack is given an erroneous dimension, for example because the variable used to determine the dimension is not properly defined. I'd expect this sort of error to be caught when running in debug mode, however, so I can't see why there should be such a difference in behaviours between the two modes.
Stephen.
I guess it's time for me to have a good look around to see where I'm putting biggish arrays on the stack; it may be best if I keep the stack for one-dimensional arrays and put anything bigger than this on the heap anyway.
Thanks,
Stephen.
It looks like there have been two separate issues here...
Some very strange things happened when I started changing the Debug configuration, and eventually I decided to delete my Visual Studio project and create a brand new one. With the brand new project, the "access violation" problem went away.
I've tracked the "stack overflow" problem to a routine where the program does some bit-manipulations to turn an array received as C unsigned chars into a false-colour 32-bit ARGB image. For example, in a loop where 'nx' and 'ny' represent the dimensions of the image:
[fortran]character(c_char), intent(in), dimension(:,:) :: uchar_image
integer(4), intent(out), dimension(:,:) :: ARGB_image

do j = 1, ny
   do i = 1, nx
      ! First we transfer the C unsigned char bitwise
      ! into the 8 least significant bits of a conventional
      ! Fortran 32-bit integer
      ARGB_image(i,j) = transfer(uchar_image(i,j), nx)
      ! Recall that all the second argument does
      ! is to provide the container type
      ! ... (other lines commented out)
   end do
end do[/fortran]
... will cause a stack overflow in Release mode despite working fine in Debug mode.
I can't actually see what gets put on the stack here, nor why using /heap-arrays0 should make any difference, as the arrays uchar_image and ARGB_image are both allocated in the C++ application and only handled elementally in the Fortran routine.
Stephen.
Actually it doesn't fail on the first loop, but only after several thousand pixels have been processed.
Stephen.
...but it's definitely at the "transfer" statement that it fails.
Let me suggest the following as an alternative to the TRANSFER - I assume you realize that with TRANSFER, the upper 24 bits of the result are undefined and it is the necessity of constructing this value that triggers the creation of the stack temp.
ARGB_image(i,j) = ZEXT(ICHAR(uchar_image(i,j)))
This will move the 8 character bits into ARGB_IMAGE and fill the rest with zero. The generated code for this is also much better. ZEXT is an extension - you could also use:
ARGB_image(i,j) = IBITS(ICHAR(uchar_image(i,j)),0,8)
though this might be a bit slower.
Lastly, you want CHARACTER(KIND=C_CHAR) in the declaration of uchar_image. What you have works by coincidence since C_CHAR is 1 in our implementation, but that's not what you mean.
Can you post a complete subroutine that shows the problem?
Hi Steve,
By following your advice I seem to have solved the problem, but for the record here is the full routine in its two versions - the original which gave the stack overflow at the first "transfer" when in Release mode, and the new version I wrote today which, besides being much more elegant and almost certainly faster, runs in Release mode without error. The routine takes a monochrome image of a laser beam profile and 'colours it in' according to the separately-provided wavelength. For testing purposes the "RGB_from_Wavelength" function could be replaced by anything that returns real numbers in the range (0, 1) for the variables 'red', 'green' and 'blue'.
Here's the old, bad version:
[fortran]subroutine ConvertFormats_Char(uchar_image, wavelength, ARGB_image)
   use iso_c_binding
   character(c_char), intent(in), dimension(:,:) :: uchar_image
   real(kind(1d0)), intent(in) :: wavelength
   integer(4), intent(out), dimension(:,:) :: ARGB_image
   integer :: i, j, nx, ny
   real(kind(1d0)) :: red, green, blue

   ! Begin by finding the RGB components corresponding to the current wavelength:
   call RGB_from_Wavelength(wavelength, red, green, blue)

   nx = size(ARGB_image, 1)
   ny = size(ARGB_image, 2)

   !DEC$ LOOP COUNT (800)
   do j = 1, ny
      !DEC$ LOOP COUNT (800)
      do i = 1, nx
         ! First we transfer the C unsigned char bitwise into the 8 least
         ! significant bits of a conventional Fortran 32-bit integer
         ! (recall that all the second argument does is provide the container type):
         ARGB_image(i,j) = transfer(uchar_image(i,j), nx)
         ARGB_image(i,j) = iand(ARGB_image(i,j), int(Z'000000FF',4))
         ! Necessary because the standard doesn't specify
         ! what 'transfer' puts into the leftmost bits.
         ! Now we calculate the red component, and shift it eight bits to the left:
         ARGB_image(i,j) = int(real(ARGB_image(i,j), kind(1d0)) * red, 4)
         ARGB_image(i,j) = ishft(ARGB_image(i,j), 8)
         ! We then add the green component, and shift a further eight bits to the
         ! left; this looks necessarily ugly because we're doing the 'transfer
         ! & iand' inline, to avoid creating any new variables:
         ARGB_image(i,j) = ior(ARGB_image(i,j), &
                               int(real(iand(transfer(uchar_image(i,j), nx), &
                                             int(Z'000000FF',4)), kind(1d0)) &
                                   * green, 4))
         ARGB_image(i,j) = ishft(ARGB_image(i,j), 8)
         ! Now we add the blue component
         ARGB_image(i,j) = ior(ARGB_image(i,j), &
                               int(real(iand(transfer(uchar_image(i,j), nx), &
                                             int(Z'000000FF',4)), kind(1d0)) &
                                   * blue, 4))
         ! Now we need to set the most significant byte - the 'A' - to 0xFF:
         ARGB_image(i,j) = ior(ARGB_image(i,j), int(Z'FF000000',4))
      end do
   end do
end subroutine ConvertFormats_Char[/fortran]
... and here's the new, good version:
[fortran]subroutine ConvertFormats_Char(uchar_image, wavelength, ARGB_image)
   use iso_c_binding
   character(kind=c_char), intent(in), dimension(:,:) :: uchar_image
   real(kind(1d0)), intent(in) :: wavelength
   integer(4), intent(out), dimension(:,:) :: ARGB_image
   integer :: i, j, nx, ny
   real(kind(1d0)) :: charval, red, green, blue

   ! Begin by finding the RGB components corresponding to the current wavelength:
   call RGB_from_Wavelength(wavelength, red, green, blue)

   nx = size(ARGB_image, 1)
   ny = size(ARGB_image, 2)

   !DEC$ LOOP COUNT (800)
   do j = 1, ny
      !DEC$ LOOP COUNT (800)
      do i = 1, nx
         ! First we transfer the C unsigned char bitwise into the 8 least
         ! significant bits of a conventional Fortran 32-bit integer, then
         ! convert it to a double:
         charval = real(zext(ichar(uchar_image(i,j))), kind(1d0))
         ! Now we calculate the red component, and shift it eight bits to the left:
         ARGB_image(i,j) = int(charval * red, 4)
         ARGB_image(i,j) = ishft(ARGB_image(i,j), 8)
         ! We then add the green component, and shift a further eight bits to the left:
         ARGB_image(i,j) = ior(ARGB_image(i,j), int(charval * green, 4))
         ARGB_image(i,j) = ishft(ARGB_image(i,j), 8)
         ! Now we add the blue component
         ARGB_image(i,j) = ior(ARGB_image(i,j), int(charval * blue, 4))
         ! Now we need to set the most significant byte - the 'A' - to 0xFF:
         ARGB_image(i,j) = ior(ARGB_image(i,j), int(Z'FF000000',4))
      end do
   end do
end subroutine ConvertFormats_Char[/fortran]
Thank you for your help with this.
Stephen.