Intel® Integrated Performance Primitives
Deliberate problems developing high-performance vision, signal, security, and storage applications.

Normalizing/Rescaling Image in same bit depth

jcalcagni

I want to see the difference between signals picked up from a couple of 12-bit sensors that have different dynamic ranges and are only using part of the 12-bit spectrum. For example, sensor one might have values from 300-2600 and sensor two's corresponding values from 450-3600. So I want to rescale or normalize the signals, mapped as an image, so the values go from 0 to 4095 (12 bit) or 65535 (16 bit).

The only way I have been able to figure out to do this is to subtract the min value, convert to floating point, multiply by a scaling factor, and then convert back to unsigned short. This seems like an extremely inefficient way of doing it, and I need this code to be as fast as possible. What frustrates me even more is that the scaling function only allows you to scale values between different bit depths.

Another way I looked at was MulCScale, which shows that what I want can be done in one operation, but in the exact opposite order of what I want. MulCScale says that dst_pixel = src_pixel * value / max_val, where max_val is defined as the maximum value of the pixel data range. What I want to do is basically dst_pixel = src_pixel * max_val / value. Is there some way of normalizing the images without having to convert to floating point that I'm just not seeing?
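
For reference, here is roughly what that float round trip looks like today (a minimal sketch only; the NormalizeTo12Bit wrapper is my own name for illustration, and the exact ippi* signatures should be checked against your installed IPP version):

/* Sketch of the described pipeline: find the per-image range, subtract the
 * minimum, scale to 0..4095 in 32f, and convert back to 16u.
 * Signatures are quoted from memory of the ippi headers. */
#include <ipp.h>

IppStatus NormalizeTo12Bit(const Ipp16u* pSrc, int srcStep,
                           Ipp16u* pDst, int dstStep, IppiSize roi)
{
    Ipp16u minVal = 0, maxVal = 0;
    IppStatus st = ippiMinMax_16u_C1R(pSrc, srcStep, roi, &minVal, &maxVal);
    if (st != ippStsNoErr || maxVal == minVal)
        return st;

    /* temporary 32f buffer for the intermediate result */
    int tmpStep = 0;
    Ipp32f* pTmp = ippiMalloc_32f_C1(roi.width, roi.height, &tmpStep);
    if (!pTmp)
        return ippStsMemAllocErr;

    /* dst = (src - min) * 4095 / (max - min), done in three passes */
    ippiSubC_16u_C1RSfs(pSrc, srcStep, minVal, pDst, dstStep, roi, 0);
    ippiConvert_16u32f_C1R(pDst, dstStep, pTmp, tmpStep, roi);
    ippiMulC_32f_C1IR(4095.0f / (Ipp32f)(maxVal - minVal), pTmp, tmpStep, roi);
    st = ippiConvert_32f16u_C1R(pTmp, tmpStep, pDst, dstStep, roi, ippRndNear);

    ippiFree(pTmp);
    return st;
}

The saturating SubC pass at least clamps anything below the minimum to zero, but it is still three full passes over the image plus a 32f temporary, which is exactly the overhead I am hoping to avoid.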

Thanks

John
