Intel® Integrated Performance Primitives

Converting an image with n-bit depth to a 16-bit image (where 8 < n < 16)

KK__Sajas
Beginner

I need to call the following IPP functions on input image data.

  • ippiColorToGray
  • ippiLUTPalette
  • ippiScale (only for 16-bit images)
  • ippiCopy
  • ippiSet
  • ippiAlphaComp

Until now I have been using the 8-bit and 16-bit versions of these functions, but we now also accept 12-bit images as input. For ippiLUTPalette, I see that we can pass the bit size we are dealing with, but the other APIs don't provide that option.

One approach I was thinking of trying is to convert images whose bit depth falls between 8 and 16 bits to 16-bit images and continue working on the result. I believe ippiScale performs such conversions, but I couldn't find a flavor of it that works on bit depths other than 8, 16, and 32.

Is there a way to perform this conversion?

Or is it possible to call the above-mentioned APIs on images with bit depths other than 8 and 16 bits?
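For illustration, the sketch below shows what such a widening step might look like, assuming the 12-bit samples are already stored one per 16-bit word with the upper four bits zero (exactly the point the reply below asks about). The choice of ippiLShiftC_16u_C1R, the image size, and the error handling are illustrative assumptions, not something stated in the post.

/* Minimal sketch (editorial assumption): the 12-bit samples are already
 * stored one per 16-bit word, lower 12 bits significant, upper 4 bits zero.
 * Shifting every value left by 4 bits maps the 12-bit range [0, 4095] onto
 * [0, 65520], after which the regular 16u flavors of ippiCopy, ippiSet,
 * ippiAlphaComp, etc. can be applied as usual. */
#include <stdio.h>
#include "ipp.h"

int main(void)
{
    IppiSize roi = { 640, 480 };   /* hypothetical image size */
    int srcStep, dstStep;          /* row strides in bytes    */

    /* 16-bit single-channel buffers for source and result. */
    Ipp16u *src = ippiMalloc_16u_C1(roi.width, roi.height, &srcStep);
    Ipp16u *dst = ippiMalloc_16u_C1(roi.width, roi.height, &dstStep);
    if (!src || !dst) return 1;

    /* ... fill 'src' with the 12-bit-in-16-bit pixel data here ... */

    /* Expand 12-bit values toward the 16-bit range by a left shift of 4. */
    IppStatus st = ippiLShiftC_16u_C1R(src, srcStep, 4, dst, dstStep, roi);
    printf("ippiLShiftC_16u_C1R: %s\n", ippGetStatusString(st));

    ippiFree(src);
    ippiFree(dst);
    return 0;
}

A plain left shift maps 4095 to 65520; if an exact full-scale mapping to 65535 is needed, replicating the top four bits into the low bits, (v << 4) | (v >> 8), is a common alternative.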

Chao_Y_Intel
Moderator

Hi Sajas, 

How is the 12-bit image data stored at the moment? Is each pixel stored as two bytes with the unused upper bits ignored, or is it stored in some other way?

thanks,
Chao
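If instead the 12-bit data arrives packed rather than one sample per 16-bit word, it would first have to be unpacked into one 16-bit word per pixel before any ippi*_16u function can process it. The sketch below assumes one hypothetical packing (two pixels per three bytes, first pixel in the low 12 bits); the real layout depends on the image source and is not stated in this thread.

/* Hypothetical example only: unpack 12-bit pixels packed two-per-three-bytes
 * (first pixel in the low 12 bits of each 3-byte group) into one 16-bit word
 * per pixel. A real implementation must match the actual file or camera
 * layout, which this thread has not yet established. */
#include <stddef.h>
#include <stdint.h>

static void unpack_12bit_pairs(const uint8_t *packed, uint16_t *dst, size_t numPixels)
{
    for (size_t i = 0; i + 1 < numPixels; i += 2) {
        const uint8_t *p = packed + (i / 2) * 3;               /* 3 bytes per 2 pixels */
        dst[i]     = (uint16_t)(p[0] | ((p[1] & 0x0F) << 8));  /* low 12 bits          */
        dst[i + 1] = (uint16_t)((p[1] >> 4) | (p[2] << 4));    /* high 12 bits         */
    }
    /* An odd trailing pixel (if any) is ignored in this sketch. */
}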
