I'm having a hard time finding technical details on how the depth sensing in these cameras works.
Could anyone point me to where I could find that?
Do the depth cameras work just like the Kinect? Does the IR laser just project points that the IR camera reads? I really want to learn the details of how it works for my project, where I am attempting to use two F200/SR300s' depth streams for real-time hand tracking, if that's possible.
I'm currently running into IR interference issues and have not found a solution yet.
Colleen Culbertson (Intel) wrote:
The front-facing cameras work differently than the R200, so please clarify which camera you need info on.
Hi Colleen, it would be great if I could get info on both the R200 and F200/SR300.
The R200 has right and left depth (IR) cams, so it works like your eyes. See https://software.intel.com/en-us/articles/realsense-r200-camera
The F200/SR300 depend more on the IR projector, using a single IR cam with patterned light - see https://software.intel.com/en-us/articles/a-comparison-of-intel-realsensetm-front-facing-camera-sr30...
I would also suggest reading this forum topic: https://software.intel.com/en-us/forums/realsense/topic/543419
How the F200/SR300 works: basically, it projects a series of IR "band" patterns that change over time. Have a look at the previous link for more details.
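To make the "band patterns changing through time" idea concrete, here is a generic temporal structured-light sketch using Gray codes, a common coding for this kind of system. This is purely illustrative: the actual F200/SR300 pattern coding is proprietary, and nothing below is Intel's algorithm.

```python
# Generic temporal structured-light decoding sketch (Gray code).
# Illustrative only; NOT the F200/SR300's proprietary coding scheme.

def gray_to_binary(bits):
    """Convert a Gray-code bit sequence (MSB first) to an integer."""
    value = 0
    prev = 0
    for b in bits:
        prev ^= b                # Gray decoding: XOR of the running prefix
        value = (value << 1) | prev
    return value

# Over N projected frames, each camera pixel records whether it was lit (1)
# or dark (0) under each band pattern. Decoding those bits identifies the
# projector stripe that illuminated the pixel, which triangulation against
# the known projector geometry then turns into depth.
bits_seen = [1, 1, 0, 1]         # e.g. lit, lit, dark, lit over 4 patterns
stripe_index = gray_to_binary(bits_seen)
```

The point of Gray codes here is robustness: adjacent stripes differ in only one bit, so a decoding error at a stripe boundary shifts the index by at most one.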
The R200, on the other hand, uses a static IR "dot" pattern (similar to the Kinect 1) combined with passive IR stereoscopy. Note that the Kinect 2 uses a different technology, time-of-flight.
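For the stereo side of this, depth comes from the standard triangulation relation Z = f·B/d (focal length times baseline over disparity). A minimal sketch, with made-up focal length and baseline values rather than the R200's actual calibration:

```python
# Illustrative stereo triangulation; the numbers below are hypothetical,
# not the R200's real calibration parameters.

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")      # no match, or point at infinity
    return focal_px * baseline_m / disparity_px

# Example: 600 px focal length, 70 mm baseline, 30 px disparity
# gives a depth of about 1.4 m.
depth_m = disparity_to_depth(30, 600, 0.07)
```

Notice why the projected dot pattern helps: passive stereo needs texture to match left and right pixels, and the IR dots provide that texture even on blank walls.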
Now, when you use more than one of these cameras, because they are active sensors you need to make sure they are not projecting their IR patterns onto an overlapping region: the patterns will mix and degrade depth sensing for both cameras. You can either separate them spatially so they cover different areas, or multiplex them in time, switching one camera's IR projector off while the other projects its pattern and estimates depth, and vice versa.