I'm trying to implement something similar to the Kabsch algorithm (called Wahba's problem in other fields).
Basically, I have a model image and a scene image and I would like to figure out the best match between the two. For the output, I would get a rotation angle and X and Y offsets. The rotation angle isn't that important, so if I can't find it easily, I think we can live without it.
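For reference, if you can extract corresponding point pairs from both images, the Kabsch algorithm itself is only a few lines. Here is a minimal NumPy sketch (my own illustration, not IPP code) that recovers the rotation and the X/Y offset from two already-corresponded 2-D point sets:

```python
import numpy as np

def kabsch(model, scene):
    """Estimate the rigid 2-D transform (rotation R + translation t)
    that best maps `model` points onto `scene` points. Assumes the two
    (N, 2) arrays are in corresponding order."""
    # Centroids carry the translation part.
    mc = model.mean(axis=0)
    sc = scene.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model - mc).T @ (scene - sc)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = sc - R @ mc
    angle = np.arctan2(R[1, 0], R[0, 0])  # rotation angle in radians
    return R, t, angle
```

The catch, of course, is that this needs known point correspondences, which is exactly what raw images don't give you directly.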
I'm looking at the IPP API and I can't find anything that looks like it could help me. Maybe I missed something?
Maybe the image proximity measure functions could help, but I don't see how I could use any of them to get my X and Y offsets.
It's been a while since I needed to do that, but in the past I just convolved the template over the image and went with the best match. However, I've always used small template sizes (8x8 or 16x16), and it won't handle rotations. Depending on how difficult it is to describe your model, you might be able to use a generalized Hough transform (not sure if IPP supports that though).
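To make the "convolve and take the best match" idea concrete, here's a brute-force normalized cross-correlation sketch in NumPy (an illustration of the technique, not tuned for speed; a real-time version would use FFTs or an optimized library routine):

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the (y, x) offset with
    the highest normalized cross-correlation score. Translation only:
    this does not handle rotation."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    tnorm = np.linalg.norm(t)
    best, best_yx = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tnorm
            score = (p * t).sum() / denom if denom else 0.0
            if score > best:
                best, best_yx = score, (y, x)
    return best_yx, best
```

An exact match scores 1.0; the returned offset is your X/Y shift. The double loop is O(image × template), which is why small templates like 8x8 or 16x16 were practical.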
We thought of a convolution with the template, but we are trying to get something to work in real time.
We can easily get a binary image with a cloud of points, and that is what we want to match with our model. Unfortunately, the generalized Hough transform will be of little or no use (and IPP does support the Hough transform). That is why we were looking for something like a singular value decomposition of the image, so we could figure out the principal components in our images ... or maybe simply a 2D cross-correlation.
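Since you already have a cloud of points, the SVD/principal-components idea is straightforward to sketch: the difference between the model's and scene's centroids gives the X/Y offset, and the difference between their principal-axis angles gives a rotation estimate (up to a 180-degree ambiguity, since a principal axis has no preferred direction). A minimal NumPy version, assuming `points` is an (N, 2) array of the nonzero pixel coordinates:

```python
import numpy as np

def principal_axes(points):
    """PCA of a 2-D point cloud via SVD. Returns the centroid and the
    angle of the dominant principal axis. Comparing model vs. scene:
    centroid difference = X/Y offset, angle difference = rotation
    estimate (modulo 180 degrees)."""
    c = points.mean(axis=0)
    # Right singular vectors of the centered cloud are the principal
    # directions, ordered by decreasing variance.
    _, _, Vt = np.linalg.svd(points - c, full_matrices=False)
    axis = Vt[0]
    angle = np.arctan2(axis[1], axis[0])
    return c, angle
```

Note this only gives a meaningful rotation when the cloud is anisotropic (elongated); for a roughly circular cloud the principal direction is ill-defined and only the centroid offset is reliable.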