Intel® Integrated Performance Primitives

Why does ippiHistogramGetBufferSize() require the ROI?

Rietschin__Axel
Beginner

ippiHistogramGetBufferSize() requires the ROI as an input parameter, which is extremely unfortunate. Consider the case where one wants to compute the histograms of a large number of images of unknown sizes: it means we need to allocate/initialize/free the spec buffers for each image, as demonstrated here: https://software.intel.com/en-us/node/529046
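
To illustrate, the per-image pattern looks roughly like this (a sketch along the lines of the linked example, single uniform 8u channel, error handling trimmed; the helper name is mine):

#include "ipp.h"

/* Per-image pattern: the sizes are queried with this image's ROI, so the
   spec and the work buffer get allocated and initialized for every image. */
IppStatus HistogramOneImage(const Ipp8u* pSrc, int srcStep, IppiSize roi,
                            Ipp32u* pHist /* 256 entries */)
{
    int nLevels[] = { 257 };                     /* 257 levels -> 256 bins */
    Ipp32f lowerLevel[] = { 0 }, upperLevel[] = { 256 };
    int specSize = 0, bufferSize = 0;

    IppStatus sts = ippiHistogramGetBufferSize(ipp8u, roi, nLevels,
                        1 /*channels*/, 1 /*uniform*/, &specSize, &bufferSize);
    if (sts != ippStsNoErr) return sts;

    IppiHistogramSpec* pSpec = (IppiHistogramSpec*)ippsMalloc_8u(specSize);
    Ipp8u* pBuffer = ippsMalloc_8u(bufferSize);

    sts = ippiHistogramUniformInit(ipp8u, lowerLevel, upperLevel, nLevels,
                                   1, pSpec);
    if (sts == ippStsNoErr)
        sts = ippiHistogram_8u_C1R(pSrc, srcStep, roi, pHist, pSpec, pBuffer);

    ippsFree(pSpec);
    ippsFree(pBuffer);
    return sts;
}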

With the old API, I could initialize the levels only once for the lifetime of my process and call ippiHistogramEven_8u_AC4R() millions of times on images of any size. This seems like a regression.

What happened to ippiHistogramEven_8u_AC4R(), by the way? I only see ippiHistogram_8u_C4R() in the new API, meaning I need to allocate 4 buffers for the result instead of 3, even though the histogram of the (unused) alpha channel is never going to be of any use.
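
For reference, the old call went along these lines (from memory, so the exact prototype may differ by IPP version; note that only three result buffers are involved, as alpha is skipped):

#include "ipp.h"

/* Old-API sketch: ippiHistogramEven_8u_AC4R computed the histograms of the
   three color channels and ignored alpha; levels could be set up once. */
IppStatus OldHistogram(const Ipp8u* pSrc, int srcStep, IppiSize roi)
{
    Ipp32s hist[3][256], levels[3][257];
    Ipp32s* pHist[3]   = { hist[0], hist[1], hist[2] };
    Ipp32s* pLevels[3] = { levels[0], levels[1], levels[2] };
    int     nLevels[3]    = { 257, 257, 257 };
    Ipp32s  lowerLevel[3] = { 0, 0, 0 };
    Ipp32s  upperLevel[3] = { 256, 256, 256 };

    return ippiHistogramEven_8u_AC4R(pSrc, srcStep, roi, pHist, pLevels,
                                     nLevels, lowerLevel, upperLevel);
}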

Finally, the data type of the histogram bins is now Ipp32u (it was Ipp32s before), but there is no in-place variant of ippsAdd() for 32u, which would be needed to add histograms together (for example, when computing an image's histogram in multiple bands).
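
For now I work around it with the out-of-place ippsAdd_32u, which does still seem to exist, plus a copy back (a sketch; the helper name is mine):

#include "ipp.h"

/* Workaround sketch: sum per-band Ipp32u histograms through a scratch
   buffer, then copy back. There is no ippsCopy_32u either, hence the
   bit-for-bit cast to Ipp32s for the copy. */
IppStatus AccumulateHist(Ipp32u* pAccum, const Ipp32u* pBand,
                         Ipp32u* pScratch, int nBins)
{
    IppStatus sts = ippsAdd_32u(pAccum, pBand, pScratch, nBins);
    if (sts == ippStsNoErr)
        sts = ippsCopy_32s((const Ipp32s*)pScratch, (Ipp32s*)pAccum, nBins);
    return sts;
}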

It seems that a bunch of functions were dropped and breaking changes were made, making it impossible to simply move to the latest IPP and recompile existing code to take advantage of the latest CPU features :-(

Thoughts?

Axel

Igor_A_Intel
Employee

Hi Axel,

You should use the maximum expected ROI for GetBufferSize, and then you can use the initialized spec "millions" of times. It was done this way in order to remove internal memory allocations.
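
In other words, something like this (a sketch, single uniform 8u channel, error checks omitted):

#include "ipp.h"

/* Size and initialize the spec and the work buffer once, for the largest
   expected ROI, then reuse them for every image. */
static IppiHistogramSpec* g_pSpec   = NULL;
static Ipp8u*             g_pBuffer = NULL;

IppStatus HistInitOnce(IppiSize maxRoi)
{
    int nLevels[] = { 257 };                     /* 256 bins */
    Ipp32f lowerLevel[] = { 0 }, upperLevel[] = { 256 };
    int specSize = 0, bufferSize = 0;

    IppStatus sts = ippiHistogramGetBufferSize(ipp8u, maxRoi, nLevels, 1, 1,
                                               &specSize, &bufferSize);
    if (sts != ippStsNoErr) return sts;

    g_pSpec   = (IppiHistogramSpec*)ippsMalloc_8u(specSize);
    g_pBuffer = ippsMalloc_8u(bufferSize);
    return ippiHistogramUniformInit(ipp8u, lowerLevel, upperLevel, nLevels,
                                    1, g_pSpec);
}

/* Can then be called "millions" of times, for any ROI up to maxRoi. */
IppStatus HistOneImage(const Ipp8u* pSrc, int srcStep, IppiSize roi,
                       Ipp32u* pHist)
{
    return ippiHistogram_8u_C1R(pSrc, srcStep, roi, pHist, g_pSpec, g_pBuffer);
}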

regards, Igor.

Rietschin__Axel
Beginner

Thanks Igor, maybe this should be added to the documentation?

I ran some experiments, and the sizes I get from GetBufferSize seem to be the same regardless of the size of the ROI, so I am using 1x1 for the time being. Since the ROI ultimately appears not to be used (the only change in behavior I could observe was when I passed 0x0), maybe the doc should include a note.
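
For reference, the kind of probe I used (a sketch; the function name is mine):

#include <stdio.h>
#include "ipp.h"

/* Query the spec/buffer sizes for a few ROIs and compare. On my build the
   sizes come back identical for anything >= 1x1; only 0x0 returns an error. */
void ProbeSizes(void)
{
    IppiSize rois[] = { { 0, 0 }, { 1, 1 }, { 640, 480 }, { 16384, 16384 } };
    int nLevels[] = { 257 };
    int i;

    for (i = 0; i < 4; i++) {
        int specSize = 0, bufferSize = 0;
        IppStatus sts = ippiHistogramGetBufferSize(ipp8u, rois[i], nLevels,
                            1, 1, &specSize, &bufferSize);
        printf("%dx%d -> status %d, spec %d, buffer %d\n",
               rois[i].width, rois[i].height, (int)sts, specSize, bufferSize);
    }
}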

Thanks,
Axel

Igor_A_Intel
Employee

Hi Axel,

Sorry, you are right: the buffer size doesn't depend on roiSize in the current implementation, so roiSize is a "reserved" parameter. It is checked in the Badarg test for greater-than-zero values, and a bad status is returned if the width and/or height are <= zero. The buffer size depends only on the number of channels and the data type.

regards, Igor
