I wish to integrate volume data by hand (scene perception's DoReconstruction) using my own camera pose tracking system. If I enable the sense manager and scene perception, DoReconstruction returns status code -501, which seems to mean that something in scene perception is not initialized. To initialize it, one frame needs to be processed by the pipeline (AcquireFrame), after which DoReconstruction works.
My problem is that I want to use DoReconstruction without having to integrate the data from the first frame. Furthermore, I found that EnableSceneReconstruction(false) only works after the first frame has been processed. I found a hack where I cover the camera during the first frame so the volume initializes empty, but as I said, it is a hack :). Is there a way to use reconstruction as a "standalone" system? That is, can I initialize the volume empty and simply call DoReconstruction(image, pose) to populate it?
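For reference, the standalone flow I am hoping for would look roughly like this (pseudocode, not actual SDK code; `CreateEmptyVolume` is hypothetical and only illustrates the missing initialization step, while `DoReconstruction` is the existing scene perception call):

```
// Hypothetical: initialize the volume empty, without integrating any frame.
volume = scenePerception.CreateEmptyVolume()

// Then feed depth frames with poses from my own tracker.
for each (depthImage, pose) from myOwnTrackingSystem:
    scenePerception.DoReconstruction(depthImage, pose)  // populate the volume directly
```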
I have a few related questions:
1- Does the tracking algorithm use the volume data, or are the two independent, as in most SLAM/volume-integration algorithms? Is there any public information about these algorithms (paper, blog, etc.)?
2- Can someone confirm that TrackingQuality depends on volume integration? My tests show that tracking quality drops when there is no reconstructed volume in the viewpoint, even when the pose estimate seems fine.
Sorry for the late reply.
A slightly more sophisticated workaround: start scene perception (unpause and reset) when the scene quality is less than 0.15, unlike the sample, where it is started when scene quality is >= 0.15. Since we do not track well when scene quality is low, we do not integrate in that case, so the first frame will not be integrated; after that you should be able to integrate, and also to disable extended reconstruction. The easiest way to get low scene quality is to point at a scene more than 3 meters from the camera, or at the ceiling. At this moment, disabling reconstruction and calling DoReconstruction both require the first frame to be processed so that the coordinate system is set correctly. However, we will investigate the feasibility of removing this constraint and will update the release notes if everything checks out.
For the first question: the tracking module consists of several trackers, essentially depth, color, gravity, and inertial. At this moment only depth-based tracking uses the accumulated volume data, so depth-based tracking cannot track a scene that does not have close overlap with the scanned area. The depth tracker uses a dense surface-matching approach; for relevant research, please refer to Newcombe, Richard A., et al., "KinectFusion: Real-time dense surface mapping and tracking," Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), IEEE, 2011. Color-based tracking is similar to feature-based tracking algorithms. The other trackers are only active if the device has the necessary sensors.
For the second question: at this moment we use aggressive criteria for volume integration, meaning that scene perception will not accumulate/integrate data unless the scene quality is high. We would stress that this is strictly implementation specific and may change in the future. It is done specifically to avoid volume corruption due to false-positive pose estimates. So if there is no reconstruction, the estimated accuracy was not high; since depth-based tracking uses the reconstructed volume, it would probably fail in that case, but the other trackers (color, gravity, and inertial) can still give a reasonable estimate, though it may be somewhat less accurate than with a working depth-based tracker.
Thanks for the great answer. I was not sure whether scene perception had a color tracker; good to know that this information is also used (even if RGB data is noisy!).