You could subtract 0x8000 from the pixel values and then filter the image with any of the IPP 16s filters.
If the filter coefficients are normalized (their sum equals 2^scaleFactor, so the effective gain is 1), you can add 0x8000 back after filtering.
Thanks,
Alexander
OK, I understand.
What is Intel's general recommendation for 16-bit grayscale pixels? Is the current design intended to work best with 16s? My app uses 16u for grayscale pixels. Why would 16s be better than 16u?
Or should I keep my 16-bit pixels as 16s throughout my app?
I perform LUT (to 24bpp), filtering, zoom, JPEG, and BMP operations.
Thomas
Hi,
One problem is that 16u data are non-negative, while many filters (e.g. Sobel, Scharr) can produce negative results, so the output must be either 16s (scaled) or 32s.
Another issue is "native" 16u filters (either with 16s scaled output, or with 16u output for kernels with no negative elements): IPP currently has no such filters. You could submit a feature request for them to Intel Premier Support; we have had no such request before.
Alexander
