I am writing a feature tracker based around ippiOpticalFlowPyrLK_8u_C1R. Most of the time it works really well, but occasionally I get a frame where all features get tracked to a nearly featureless section of ground near my target. When this happens, the error for all of the points gets really high.
I am curious whether anyone else has seen this and what can be done to fix it.
Pete,
It would be nice if you could provide us with the input parameters of the function for that particular case. It might help our experts investigate the issue.
You may also try linking your program with the PX version of the IPP libraries (if you use static linking, you can call ippStaticInitCpu(ippCpuUnknown) at the beginning of your application; if you use dynamic linking, you can remove all CPU-specific DLLs except those which have 'px' in the name from the folder where your application is located) to check whether the issue is related to the optimized code or not.
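For the static-link case, a minimal sketch of what that initialization might look like (assuming the ippStaticInitCpu entry point from the static dispatcher libraries; treat this as illustrative rather than exact):
#include "ipp.h"   // ippStaticInitCpu is provided by the static dispatcher libraries

int main( int argc, char* argv[] )
{
    // Force the generic (px) branch instead of the CPU-optimized code paths.
    // Only meaningful when the application is linked against the static IPP libraries.
    IppStatus st = ippStaticInitCpu( ippCpuUnknown );
    if( st != ippStsNoErr )
    {
        // Could not force the px branch; the dispatcher keeps its default selection.
    }

    // ... rest of the application, including the optical flow calls ...
    return 0;
}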
Regards,
Vladimir
The function call looks like this:
ippiOpticalFlowPyrLK_8u_C1R( pyr1, pyr2, prevPoints, nextPoints, status, error, 9, 51, 4, 10, 0.0001, OF_state );
The constants were the values found to work best, but they still produce the occasional incorrect convergence.
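For reference, the same call with the arguments annotated according to the documented parameter order (the mapping below is inferred from the constructor posted later in this thread, so double-check it against the ippcv header):
ippiOpticalFlowPyrLK_8u_C1R( pyr1,        // pyramid built from the previous frame
                             pyr2,        // pyramid built from the current frame
                             prevPoints,  // feature coordinates on the previous frame
                             nextPoints,  // tracked coordinates on the current frame (output)
                             status,      // per-feature lost/found indicators (output)
                             error,       // per-feature tracking error (output)
                             9,           // number of features
                             51,          // tracking window size
                             4,           // number of pyramid levels to use
                             10,          // maximum iterations per level
                             0.0001,      // convergence threshold
                             OF_state );  // state from ippiOpticalFlowPyrLKInitAlloc_8u_C1R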
I also tried just using the 'px' libraries, but I still get the same result.
Thanks!
Pete,
I think it might also depend on the input data. Could you please write the values of the input arrays to a file just before this function call? Additionally, it is also important to know which parameters you use for the ippiOpticalFlowPyrLKInitAlloc_8u_C1R call.
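A minimal sketch of the kind of dump that would help (the dumpPoints helper and the file name are hypothetical, just to show what is worth capturing):
#include <cstdio>
#include "ipp.h"

// Hypothetical helper: append one frame's worth of feature data to a text log
// so the exact inputs and outputs of ippiOpticalFlowPyrLK_8u_C1R can be inspected later.
static void dumpPoints( FILE* f, const char* tag, const IppiPoint_32f* pts,
                        const Ipp32f* err, const Ipp8s* status, int numFeat )
{
    std::fprintf( f, "%s:", tag );
    for( int i = 0; i < numFeat; ++i )
        std::fprintf( f, " (%.3f, %.3f, err=%.4f, st=%d)",
                      pts[i].x, pts[i].y,
                      err ? err[i] : 0.0f, status ? (int)status[i] : 0 );
    std::fprintf( f, "\n" );
}

// Usage around the tracking call (file name illustrative):
//   FILE* f = std::fopen( "lk_dump.txt", "a" );
//   dumpPoints( f, "prev", prevPoints, NULL, NULL, 9 );
//   ippiOpticalFlowPyrLK_8u_C1R( ... );
//   dumpPoints( f, "next", nextPoints, error, status, 9 );
//   std::fclose( f );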
Regards,
Vladimir
Hi,
Yes, it is possible, especially if the neighbourhood of the tracked point contains no features. I have seen such situations with frames from synthetic video. If there is an edge, a corner, or something similar around the point, it is tracked correctly on the next frame. If the window around the point contains no detail, it can drift far from the real point.
If you send the test case (pictures, point coordinates and function arguments) we could answer in more detail.
You could try increasing the window size to catch image features better.
Thanks,
Alexander
I am attaching an Excel file to this post. It contains all of the data given to and coming from OpticalFlowPyrLK. There are pairs of lines, with the first line being the input and the second being the output. Hopefully this gives you some data that could help with this problem.
Additionally, I set up all of the data (including the OpticalFlowPyrLKInitAlloc call) like this:
FeatureTracker::FeatureTracker( int imgwidth, int imgheight ) :
    m_ImageWidth( imgwidth ),
    m_ImageHeight( imgheight ),
    m_InitFlag( true ),
    m_pPrevPoints( NULL ),
    m_pNextPoints( NULL ),
    m_pStatus( NULL ),
    m_pError( NULL ),
    m_TargetWidth( 80.0 ),
    m_TargetHeight( 40.0 ),
    m_DebugFlag( false ),
    m_DebugFile( ),
    m_NumFeats( 9 ),
    m_MaxPyrLevel( 4 ),
    m_LkWinSize( 55 ),     // 41
    m_LkIter( 10 ),        // 5
    m_LkThresh( 0.0001f ), // 0.03
    m_LocationX( 0.0f ),
    m_LocationY( 0.0f ),
    m_pCurrentFrame( NULL ),
    m_FeatToUpdate( 0 )
{
    // Per-feature input/output arrays for the optical flow call
    m_pPrevPoints = (IppiPoint_32f*)ippsMalloc_32f( m_NumFeats*2 );
    m_pNextPoints = (IppiPoint_32f*)ippsMalloc_32f( m_NumFeats*2 );
    m_pStatus = ippsMalloc_8s( m_NumFeats );
    m_pError = ippsMalloc_32f( m_NumFeats );

    // 5-tap smoothing kernel for pyramid downsampling (fixed point, sums to 32768)
    m_pKernel = ippsMalloc_16s( 5 );
    m_pKernel[2] = 12288;
    m_pKernel[1] = m_pKernel[3] = 8192;
    m_pKernel[0] = m_pKernel[4] = 2048;

    // Build the two pyramid structures and their downsampling states
    IppiSize zeroLevelSize = { imgwidth, imgheight };
    ippiPyramidInitAlloc( &m_pPyr1, m_MaxPyrLevel, zeroLevelSize, 2.f );
    ippiPyramidInitAlloc( &m_pPyr2, m_MaxPyrLevel, zeroLevelSize, 2.f );
    ippiPyramidLayerDownInitAlloc_8u_C1R( (IppiPyramidDownState_8u_C1R**)&m_pPyr1->pState,
                                          zeroLevelSize, 2.f, m_pKernel, 5, IPPI_INTER_LINEAR );
    ippiPyramidLayerDownInitAlloc_8u_C1R( (IppiPyramidDownState_8u_C1R**)&m_pPyr2->pState,
                                          zeroLevelSize, 2.f, m_pKernel, 5, IPPI_INTER_LINEAR );

    // Allocate the image planes for every pyramid level
    for( int i = 0; i <= m_MaxPyrLevel; i++ )
    {
        m_pPyr1->pImage[i] = ippiMalloc_8u_C1( m_pPyr1->pRoi[i].width, m_pPyr1->pRoi[i].height, &m_pPyr1->pStep[i] );
        m_pPyr2->pImage[i] = ippiMalloc_8u_C1( m_pPyr2->pRoi[i].width, m_pPyr2->pRoi[i].height, &m_pPyr2->pStep[i] );
    }

    // State for the pyramidal Lucas-Kanade tracker
    ippiOpticalFlowPyrLKInitAlloc_8u_C1R( &m_pOFState, zeroLevelSize, m_LkWinSize, ippAlgHintAccurate );

    // Create space for feature finding (min-eigenvalue corner measure)
    IppiSize imgsize = { imgwidth, imgheight };
    int buf_size;
    ippiMinEigenValGetBufferSize_8u32f_C1R( imgsize, 3, 5, &buf_size );
    m_pEigenBuffer = ippsMalloc_8u( buf_size );
    m_pEigenVals = ippiMalloc_32f_C1( imgwidth, imgheight, &m_EigenStep );
    m_pEigenMask = ippiMalloc_32f_C1( imgwidth, imgheight, &m_EigenMaskStep );
}
Alexander,
It is very difficult for me to send images, as they are not in a standard format. I did, however, just reply to Vladimir with some data. Perhaps that could be a start.
The images that I am using are real images (not synthetic) of ground targets, shot from a plane. When I view feature space using ippiMinEigenVal, I can clearly see good features being tracked, right up until all features converge to a featureless patch of ground. I have some logic to prevent incorrect convergence (by reacquiring all features from the last good frame), but it doesn't seem to work in this case for some reason (it does fix other cases).
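For context, a simplified sketch of that kind of safeguard (the threshold value and the ReacquireFeatures helper are illustrative, not the actual code):
// Inspect the per-feature results of ippiOpticalFlowPyrLK_8u_C1R; if the errors
// spike, throw the result away and reacquire features from the last good frame.
bool FeatureTracker::ResultLooksValid( float maxAllowedError ) const
{
    for( int i = 0; i < m_NumFeats; ++i )
    {
        // m_pStatus[i] should also be checked here against the documented
        // "feature lost" convention for the status array.
        if( m_pError[i] > maxAllowedError )   // window mismatch too large
            return false;
    }
    return true;
}

// Usage after the tracking call (threshold illustrative):
//   if( !ResultLooksValid( 1000.0f ) )
//       ReacquireFeatures( lastGoodFrame );  // e.g. re-select corners via ippiMinEigenVal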
When you say to increase the window size, you mean the win size parameter in OpticalFlowPyrLKInitAlloc and OpticalFlowPyrLK, right?
Also, is the neighborhood around the feature point a fixed size? I ask because changing the win size seems to have no effect on the error array (errors should get larger as the neighborhood is increased).
Thanks
Pete