I need some explanation regarding the interpretation of the contents of the pSrcEdge pointer argument to the ippiDenoiseCAST filter. Should it point to an edge image? The documentation says "edge detection filtered image", but I cannot tell whether it should point to an image that has merely been filtered to make edge extraction easy (e.g. a smoothed version), or whether the actual edges have to be computed and passed to the function.
[cpp]
ippiFilterSobelVert_8u16s_C1R(pData, iDataStep, dx, dxStep, size, ippMskSize3x3);
ippiFilterSobelHoriz_8u16s_C1R(pData, iDataStep, dy, dyStep, size, ippMskSize3x3);
ippiAbs_16s_C1IR(dx, dxStep, size);
ippiAbs_16s_C1IR(dy, dyStep, size);
ippiAdd_16s_C1IRSfs(dx, dxStep, dy, dyStep, size, 0);
ippiConvert_16s8u_C1R(dy, dyStep, pEdgeData, iEdgeDataStep, size);
ippiFilterDenoiseCAST_8u_C1R(pData, NULL, iDataStep, pEdgeData, iEdgeDataStep, size, pDataOut, iDataOutStep, NULL, &param);
[/cpp]
Sorry for the late thanks, but I have asked this question so many times before without getting a reply, that I had almost lost hope. So, many thanks for the clarification. It really helps a lot.
Thanks. This really helped. Can you also clarify how the other fields in the params structure affect the whole filtering process? I have been able to reduce some blurring due to an apparent motion induced by features that change quickly in the temporal direction. Can I control this further by playing around with some of the settings in the params structure?
In addition, if you could also point me to some relevant publications related to the method, that would help too.
TemporalDifferenceThreshold and NumberOfMotionPixelsThreshold are used to determine whether a block is a static or a motion one: a block is considered a motion block if the number of pixels whose values differ from the value of the co-located pixel in the previous frame by more than TemporalDifferenceThreshold exceeds NumberOfMotionPixelsThreshold.
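The classification rule above can be sketched as follows. This is my reading of the description, not the actual IPP implementation; the function name and parameter layout are illustrative only:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: a block is a "motion" block if the number of pixels whose
 * absolute temporal difference exceeds temporalDifferenceThreshold is
 * itself greater than numberOfMotionPixelsThreshold. */
int is_motion_block(const unsigned char *curr, const unsigned char *prev,
                    int nPixels, int temporalDifferenceThreshold,
                    int numberOfMotionPixelsThreshold)
{
    int count = 0;
    for (int i = 0; i < nPixels; ++i) {
        if (abs((int)curr[i] - (int)prev[i]) > temporalDifferenceThreshold)
            ++count;
    }
    return count > numberOfMotionPixelsThreshold;
}
```

So lowering either threshold makes more blocks fall into the "motion" class.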
GaussianThreshold is used to select the spatially adjacent pixels involved in smoothing the current pixel: only the neighbours whose values differ from the current pixel's value by less than GaussianThreshold participate in the smoothing.
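As an illustration of that selection rule (again my reading, not the IPP code, and with a plain average standing in for whatever weighting IPP actually applies):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: average the centre pixel with only those neighbours whose
 * value differs from the centre by less than gaussianThreshold. */
unsigned char smooth_pixel(unsigned char center,
                           const unsigned char *neighbors, int n,
                           int gaussianThreshold)
{
    int sum = center, count = 1;   /* the centre pixel always participates */
    for (int i = 0; i < n; ++i) {
        if (abs((int)neighbors[i] - (int)center) < gaussianThreshold) {
            sum += neighbors[i];
            ++count;
        }
    }
    return (unsigned char)((sum + count / 2) / count);   /* rounded mean */
}
```

The effect is that an outlier neighbour (e.g. one sitting across an edge) is excluded from the average, which is what keeps the smoothing from blurring across structure.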
In ippiFilterDenoiseCAST_8u_C1R(), solely GaussianThresholdY is employed, while in ippiFilterDenoiseCASTYUV422_8u_C2R(), GaussianThresholdY is used for luma and GaussianThresholdUV for chroma.
HistoryWeight is the weight of the previous frame in the temporal denoising applied to the pixels of static blocks. If the function is called with pSrcPrev == NULL and pHistoryWeight != NULL, the per-block weights pointed to by pHistoryWeight are initialized to HistoryWeight (and further updated at calls with pSrcPrev != NULL).
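Assuming HistoryWeight acts as a convex blend factor in [0, 1] (a plausible reading of "weight of the previous frame", not a statement about IPP's internals), the temporal denoising of a static-block pixel would look like:

```c
#include <assert.h>

/* Sketch: blend the co-located previous-frame pixel with the current one.
 * historyWeight = 0 keeps the current pixel; 1 keeps the previous one. */
unsigned char temporal_blend(unsigned char curr, unsigned char prev,
                             float historyWeight)
{
    float out = historyWeight * prev + (1.0f - historyWeight) * curr;
    return (unsigned char)(out + 0.5f);
}
```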
As for the publications, the algorithm was developed at Intel, and I don't think that the detailed description is publicly available.
Thanks. Your explanation did iron out some kinks in my understanding of the function. Could I also ask, since you mentioned that the method was developed at Intel, if there are any IP restrictions on using this function in commercial software?
Does StrongEdgeThreshold divide the pixels into edge and non-edge pixels? Also, I am not very clear about the "weight" in the function: does it somehow control how much of the previous frame is used to construct the current frame? With all other parameters fixed, I have tried playing around with the edge and non-edge weights, but I do not see any difference in the output.
If I am processing a sequence of frames with this filter, of which ... F(n-1), F(n), F(n+1) ... is the current context, is the following correct:
1) pSrcCurr = F(n),
2) pSrcPrev = CAST(F(n-1))
3) pSrcEdge = convolve(G, F(n)), where G is a Gaussian kernel, or pSrcEdge = sobel(F(n))
2-3) EdgePixelWeight/NonEdgePixelWeight define the share of the current "edge"/"non-edge" pixel value in forming the output for that pixel: the higher the weight, the more contribution comes from the current pixel and the less from its neighbourhood.
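Putting that together with the StrongEdgeThreshold question, one plausible reading (a sketch of my interpretation, not the IPP source; the threshold comparison and the [0, 1] weight range are assumptions) is:

```c
#include <assert.h>

/* Sketch: classify the pixel from its pSrcEdge value, then blend the
 * current pixel with its smoothed neighbourhood using the weight chosen
 * by that classification. A higher weight keeps more of the pixel itself. */
unsigned char spatial_filter_pixel(unsigned char pixel,
                                   unsigned char edgeValue,
                                   unsigned char neighborhoodAvg,
                                   int strongEdgeThreshold,
                                   float edgePixelWeight,
                                   float nonEdgePixelWeight)
{
    /* StrongEdgeThreshold splits pixels into edge / non-edge classes */
    float w = (edgeValue > strongEdgeThreshold) ? edgePixelWeight
                                                : nonEdgePixelWeight;
    float out = w * pixel + (1.0f - w) * neighborhoodAvg;
    return (unsigned char)(out + 0.5f);
}
```

Under this reading the edge weights only matter for pixels that are denoised spatially, which is consistent with the remark below about all blocks turning out static in your experiments.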
And yes, you are right: the higher the HistoryWeight, the more is taken from the previous frame to form the output.
As for your experiments with the edge weights: perhaps, with the parameters/content you used, all the blocks turned out static, and thus were denoised purely temporally.
You can try decreasing TemporalDifferenceThreshold and NumberOfMotionPixelsThreshold.