I've been using the UIC example for JPEG2000 to decode JP2 images, and it worked well while we developed software for our new recording system. However, once the new hardware was ready, our software could not decode the JP2 files that the system produces. I've attached an example image. I don't believe I am changing anything, but it consistently fails in the function below (taken directly from the example):
I'm using IPP v7.0 on a Win7/64 machine with 8 GB RAM. I develop in Visual Studio 2010. The compiler is Composer XE, purchased in Sept 2011; I'm not sure how to tell the version number of that.
I see the following in the documentation: "For JPEG2000Decoder, the ReadData method currently supports image buffers of only 32-bit signed integers in plain format. If the dataOrder parameter specifies a different image format, this method returns ExcStatusFail status."
That describes the destination buffer, and my understanding was that if our source is YCbCr interleaved 8-bit pixels, the decompression would translate this to planar 32-bit values. Is my understanding wrong - can the decoder simply not handle interleaved YCbCr JPEG2000 images?
Declare a diagnostics class with a Write method (for example, MyDiagnStream derived from BaseStreamDiagn) before the "IM_ERROR ReadImageJPEG2000" function in the file jpeg2k.cpp, then change the type of the diagnOutput variable in the ReadImageJPEG2000 function from BaseStreamDiagn to MyDiagnStream.
You should now be able at least to intercept the bad statuses from the decoder.
Set a breakpoint in the Write function mentioned above. Looking at the status value alone doesn't tell you much, but with the debugger you will be able to trace back through the function calls.
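The interception pattern above can be sketched as follows. Note this is a minimal, self-contained stand-in: the real BaseStreamDiagn lives in the IPP UIC sample sources and its actual Write signature may differ; the class and member names here are only illustrative of the override-and-breakpoint idea.

```cpp
#include <cassert>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-in for the UIC samples' diagnostics base class.
// In the real samples the decoder reports each warning/error status
// through a Write-style callback like this one.
class BaseStreamDiagn {
public:
    virtual ~BaseStreamDiagn() {}
    virtual void Write(const std::string &status) {
        // Default behavior: swallow the message.
    }
};

// Replacement that records and prints every status, so a breakpoint
// placed inside Write() fires on each bad status as the decoder
// emits it, letting you walk back up the call stack.
class MyDiagnStream : public BaseStreamDiagn {
public:
    std::vector<std::string> captured;
    void Write(const std::string &status) override {
        captured.push_back(status);
        std::cerr << "UIC J2K diagn: " << status << std::endl;
    }
};
```

With diagnOutput declared as MyDiagnStream instead of BaseStreamDiagn, every status the decoder writes passes through the overridden Write and gets logged rather than discarded.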
As far as I can see, the following warning/error statuses are generated: ImageSizeMismatchImageHeaderAndCodestreamBoxes, SOTTilePartLengthExceedActualLength. A fatal exception then occurs in the djp2marker.h file while reading one of the code-stream tile headers.
The image itself looks good (it opens fine in other file viewers), so the problem could be in the JPEG2000 decoder.
Thanks for your response! I'll try it out, but based on those error messages, I can think of a couple of reasons why the decoder is failing:
1) We pad our tiles to be aligned to our hardware burst size (256 bytes). This padding comes after the FFD9 marker in each tile. We adjust the tile size in the SOT marker to match the padding, so our tile size does not measure SOT to FFD9, but rather SOT to the next SOT. This is compliant with the JPEG2000 spec, I believe. But I can see how our tile size header does not reflect the number of usable bytes in the tile code stream. Perhaps this is causing the decoder to complain?
2) Right now, we include a LOT of junk at the end of the image, since our system reads out a fixed amount of data from the hard drive rather than the precise amount. The result is that after the last image tile #400 (0x18f), the remaining data is useless leftover detritus. I believe the decoder should ignore it. But if not, we can easily truncate the image to the correct length and get rid of the garbage at the end.
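For reference, both points above hinge on the Psot field of the SOT marker segment, which per the JPEG 2000 Part 1 spec measures the tile-part length from the first byte of the SOT marker itself, i.e. exactly "SOT to the next SOT". The sketch below (buffer layout and function names are my own, not from the UIC samples) shows how Psot is read and how the stream can be truncated at the EOC marker (0xFFD9) to drop the trailing junk:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Big-endian helpers for JPEG 2000 codestream fields.
static uint16_t be16(const uint8_t *p) { return (uint16_t)((p[0] << 8) | p[1]); }
static uint32_t be32(const uint8_t *p) {
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

// Psot occupies bytes 6..9 of the SOT marker segment
// (after the 2-byte SOT marker, 2-byte Lsot and 2-byte Isot)
// and counts from the first byte of the SOT marker, so a decoder
// should jump SOT + Psot to skip any per-tile padding.
uint32_t ReadPsot(const uint8_t *sot) {
    assert(be16(sot) == 0xFF90);   // SOT marker
    return be32(sot + 6);
}

// Walk tile-parts via Psot until the EOC marker (0xFFD9) and return
// the codestream length up to and including EOC -- everything after
// that offset is the leftover junk and can be truncated.
size_t LengthUpToEOC(const std::vector<uint8_t> &cs, size_t firstSot) {
    size_t pos = firstSot;
    while (pos + 2 <= cs.size() && be16(&cs[pos]) == 0xFF90)
        pos += ReadPsot(&cs[pos]);
    if (pos + 2 <= cs.size() && be16(&cs[pos]) == 0xFFD9)
        return pos + 2;            // include the EOC marker itself
    return cs.size();              // no EOC found; leave untouched
}
```

If the decoder honors Psot, the in-tile padding should be skipped cleanly, and truncating at LengthUpToEOC would rule out the trailing-garbage theory in point 2.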
We have fixed problems in the JPEG2000 diagnostics, and the 7.1 samples (the uic_transcoder_con sample in particular) should contain a "-d" option to show warnings and fatal exceptions, so you should be able to see what the UIC J2K decoder does not like in the input stream.