Most of the data structures and function parameters that refer to image dimensions are of type int. A 32-bit signed int can represent numbers up to 2147483647. That said, my question is the following.
Can the IPP image processing functions handle images where the linear index into the raw data is larger than the maximum value representable with an int, on a 64-bit platform with sufficient memory? If this is not possible, what is the maximum size of an image that can be manipulated using IPP functions?
For example, suppose I am dealing with image that is RGBA interleaved, whose dimensions are 50000 x 50000 pixels. Suppose IPP internally needs to access the pixel at position (49000, 49900). Then the offset from the base pointer to the raw data containing the pixel information is:
offset = 49900 * step + 49000 * 4
where, disregarding alignment issues for the sake of simplicity, the step is:
step = 50000 * 4 = 200000
Thus we obtain:
offset = 9980196000 > 2147483647
which is clearly larger than the maximum number representable in an int. Will IPP functions still work?
Hello,
You are right: current IPP functions cannot address data larger than 2 GB in size (because of the limited range of a signed 32-bit integer).
There are basically two approaches to working with such big data:
1. Process huge data arrays in chunks. We provide an IPP tiling image-processing sample; you can find it under the image-processing/image-tiling folder of the IPP sample package.
2. Provide functions capable of addressing huge data arrays via 64-bit parameters. That is what we are considering as a possible future feature of the IPP product, and we are really interested in your feedback on it. How important is this for you? Do you see value in having a library that works with huge in-memory data arrays? What IPP functionality do you think would benefit most from such a feature?
Regards,
Vladimir
Thanks a lot for your prompt answer. Here are some considerations regarding the possibility of processing large images.
That's true, but the width/height only serve to control the outer loops of the primitives, converting x,y pairs into 128-bit SSE block pointers. All ippi functions take the IppiSize argument by copy (as opposed to by reference or pointer), so it should be trivial to redefine it as:
typedef struct { size_t width; size_t height; } IppiSize;
with the primitives replacing 'int' and using something like
for (size_t y = 0; y < roiSize.height; ++y)
    for (size_t x = 0; x < roiSize.width; x += sizeof(__m128i))
    {
        // SSE / assembly
    }
Hello Paul,
Is there any update regarding the implementation date for this change to unsigned int?
Thanks,
Ronen.
Dear IPP developers,
It has nothing to do with the type used for image dimensions. The images we work with get as large as 2 GB not because their individual dimensions are large, but because the stride * height product is large. Yes, somebody may have an image 8 Gpx wide and just 1 px tall, in which case there is no way to support it without changing the interface, but that is an extremely unlikely scenario.
The problem most people are likely to face (including the OP and me) can be solved by fixing the address calculations inside the IPP implementation, without any interface change or breakage. Some functions appear to do correct 64-bit pointer arithmetic on 64-bit machines, but others, like RGBToGray_8u, appear to count byte offsets from the base pointer in 32-bit registers. As others have noted, Paul Fischer's "IPP uses SIMD" argument does not hold, because the address calculations are done in general-purpose registers.
Please fix this problem: machines with more than 4 GB of RAM are even more common now than five years ago, and the computational tasks we face do use images that take that much memory.
Best regards,
Yakov
Yakov,
Thanks for the feedback. Do you have specific functions we should check further? As you know, IPP includes thousands of functions, so it is not easy to add such a feature across all of them. If you can name a few functions, we can check further whether we can optimize those.
Thanks,
Chao
Hi,
I'm using IPP version 2021.1
It seems that this problem exists for ipprResize_32f_C1V.
When my destination image volume exceeds the maximum signed integer value (2,147,483,647), I get a segmentation fault.
Is this resolved in a newer version?
Tomer
Hello,
just my 2 cents concerning the general thread topic...
From my point of view, it is desirable to have the whole of IPP work not with int parameters but with size_t parameters. The 64-bit world is here, and it is not going away.
>>That is what we are considering as one of the possible future features of IPP product and really interested in your feedback on that.
>>How it is important for you? Do you see a value of having a library to work with huge data arrays in-memory? What IPP functionality
>>do you think may benefit most of such feature?
Our company works with huge arrays of data, including image data. One big disadvantage for our IPP usage was that the ippiRemap... functions are so limited. If you (Intel) ask which functions should be extended next, the function ippiRemap_32f_C1R, for instance, is one of many we would like to see in a 64-bit universe. However, every IPP function (allocation, signal processing, image processing) should be able to work in a 64-bit size_t world. Otherwise we, the developers, always have to code the whole thing piece by piece, which requires much more development and testing time to reach the goal.
Kind regards
Hello IPP Users,
This thread was created in 2009, more than ten years ago. Please test the latest IPP version; if the issue still exists for your project, please file a new issue, providing a simple reproducer and the steps to reproduce it. Thanks.
Best Regards,
Ruqiu