Which particular functions do you think would benefit from the ability to process buffers larger than 2GB?
BTW, usually it is possible to split processing into chunks of data, which will increase data locality and minimize the amount of memory required by the application.
This is off topic, but I like the explicit parameters. Back in the IPL days I was either locked into the IPL image structure or had to copy my parameters into an IPL structure. I like being able to wrap image parameters in my own class and then call the IPP routines directly, without going through another image structure layer.
As Igor pointed out, memory-mapped files are a common way to benefit from more than 2GB of address space, as are large memory buffers such as those of a big database or dataset, for example.
Splitting it into several calls on 2GB chunks may not be beneficial if the buffer is accessed sequentially. In fact, performance may decrease due to the overhead of splitting the data and making more calls.
For instance, consider a memset function implemented with ippsSet_8u. Instead of a single call, you would have to check whether the size of the array is greater than 2GB and then call it repeatedly until the whole array is covered. This adds branching and arithmetic that a single call would avoid. I believe all IPP functions that handle arrays would benefit from this.
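To make the overhead concrete, here is a minimal sketch of the chunking wrapper a caller has to write today. `set_8u` is a stand-in for `ippsSet_8u` (the real IPP function takes an `int` length); the loop, branch, and pointer arithmetic in `set_8u_large` are exactly what a 64-bit length parameter would eliminate.

```c
#include <stddef.h>
#include <string.h>
#include <limits.h>

/* Stand-in for ippsSet_8u: the real IPP function takes an int length,
   so it cannot cover more than INT_MAX bytes per call. */
static void set_8u(unsigned char val, unsigned char *dst, int len)
{
    memset(dst, val, (size_t)len);
}

/* Wrapper the caller must write today: split a size_t-sized buffer
   into INT_MAX-sized chunks and issue one call per chunk. */
void set_8u_large(unsigned char val, unsigned char *dst, size_t len)
{
    while (len > (size_t)INT_MAX) {
        set_8u(val, dst, INT_MAX);
        dst += INT_MAX;
        len -= INT_MAX;
    }
    if (len > 0)
        set_8u(val, dst, (int)len);
}
```

With a `size_t` length parameter in the API itself, the whole wrapper collapses to a single call.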
Regarding size_t, it is defined by the C standard, so it must be supported by all compilers. All C runtime functions that handle memory buffers (memcpy, memset, strlen, etc.) use it, because it is guaranteed to be large enough to represent the size of any object on your architecture. But of course there could be a custom IPP type with the same behavior, just to be consistent.
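As a small illustration of the point (the `fill` helper below is hypothetical, not an IPP API), a `size_t` length needs no 2GB check, while an `int` length caps out at `INT_MAX` elements:

```c
#include <stddef.h>
#include <string.h>
#include <limits.h>

/* The C runtime already sizes buffers with size_t:
 *     void  *memset(void *s, int c, size_t n);
 *     size_t strlen(const char *s);
 * A size_t length can describe any object the platform can allocate. */

size_t fill(unsigned char *dst, unsigned char val, size_t len)
{
    memset(dst, val, len);   /* no 2GB check needed: n is size_t */
    return len;
}

size_t int_len_limit(void)
{
    return (size_t)INT_MAX;  /* the ceiling of today's int-length API */
}
```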
Also, it would be transparent to current users, since the new parameter would have at least the size of the current int parameter. At most there might be a warning about conversion from signed to unsigned, which is not a big deal - no length can be negative anyway.
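A quick sketch of why existing callers keep working (the `sum_8u` signature is hypothetical, standing in for a future size_t-based IPP function): passing a signed int where size_t is expected converts implicitly and preserves the value for any non-negative length.

```c
#include <stddef.h>

/* Hypothetical new-style signature taking size_t instead of int. */
static size_t sum_8u(const unsigned char *src, size_t len)
{
    size_t s = 0;
    for (size_t i = 0; i < len; ++i)
        s += src[i];
    return s;
}
```

Existing code that computes lengths as `int` can call `sum_8u` unchanged; at most a compiler with sign-conversion warnings enabled (e.g. GCC's -Wsign-conversion) flags the implicit int-to-size_t conversion.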
The architecture and design of the IPP library are well aligned with the principles established for this product. I think I have already mentioned these principles, so I see no reason to repeat them here.
Hiding allocation and other stuff in high-level code will not and should not, by itself, improve performance. The point is that you can build a higher level on top of a low-level library. If that higher-level layer is designed well, then you will hopefully not lose the performance provided by the optimized low-level library, while gaining the benefit of an easier-to-use high-level API. That is how complex software stacks are usually built in today's environment.
Thanks for your suggestion on the 64-bit length support. We have noticed similar feedback from a few other users and may consider it in a future release. To understand which functions are important in customer applications, could you provide a list of the functions used in your application? You can use the following tool to find the IPP APIs your application uses:
If you do not want to publish it, you can submit it through our Premier Support website.