We are experiencing problems encoding and decoding pixel data with predictor 7 from the JPEG lossless standard.
To reproduce this behavior we use a generated image of a white rectangle on a black background (16-bit grayscale).
This picture is attached as correct.raw; it is the source of the compressed files and the expected result after decompression.
(Image width 300, height 300, 16 bits per pixel; I also added a .PNG version for easier viewing.)
We compress the source file losslessly with IPP 5.3.4 and get the JPEGLLPredictor7Intel.raw file.
If we decompress this file with IPP it results in the correct image.
But if we decompress it with the Pegasus JPEG lossless library, we get the result shown in JPEGLLPredictor7IntelDecompressedWithPegasus.png (only a screenshot, sorry; we can't extract the pixel data at this point).
We compress the source file losslessly with Pegasus and get the JPEGLLPredictor7Pegasus.raw file.
If we decompress this file with Pegasus it results in the correct image.
But if we decompress it with IPP, we get the result shown in JPEGLLPredictor7PegasusDecompressedWithIntel.RAW.
It looks like the problem occurs in ippiReconstructPredRow_JPEG_16s_C1 on the decoding side.
Are you sure that you are implementing predictor 7 correctly?
Your implementation seems to be consistent between encoding and decoding, but, as far as we could reproduce the predicted values, not conformant to the standard.
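For reference, if we read Table H.1 of ITU-T T.81 correctly, predictor 7 is Px = (Ra + Rb)/2, where Ra is the reconstructed sample to the left and Rb the one above. A minimal sketch of what we believe the standard asks for (predict7 is our own name, not an IPP or Pegasus function), assuming unsigned 16-bit samples and a 32-bit accumulator so the 17-bit sum is halved with a logical shift:

```c
#include <stdint.h>

/* Predictor 7 from ITU-T T.81 Table H.1: Px = (Ra + Rb) / 2.
   The sum of two 16-bit samples needs 17 bits, so it is formed
   in a 32-bit unsigned type before the shift; the shift is then
   logical and no sign bit can leak into the result. */
static uint16_t predict7(uint16_t ra, uint16_t rb)
{
    return (uint16_t)(((uint32_t)ra + (uint32_t)rb) >> 1);
}
```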
We fixed the problem by writing our own function, and initially ran into the same problem you apparently have.
If you right-shift a signed 16-bit value, the sign bit is kept (an arithmetic shift; strictly speaking this is implementation-defined in C for negative values, but every compiler we know of does it this way).

short x = (short)0xFFFF; /* -1 */
int y = x >> 1;

y would again be 0xFFFF (sign-extended to -1).

If you use an unsigned type, the result is as "expected".

unsigned short x = 0xFFFF;
int y = x >> 1;

y would be 0x7FFF.