Granger_L_
Beginner
71 Views

Where to find technical details on how the depth camera works?

I'm having a hard time finding technical details on how the depth cameras work.

Could anyone point me to where I could find that?

Do the depth cameras work just like the Kinect? Is the IR laser just projecting points that the IR camera reads? I really want to learn the details of how it works for my project, where I am attempting to use two F200/SR300s' depth streams for real-time hand tracking, if that's possible.

I'm currently running into IR interference issues and have not found a solution yet.

4 Replies
Colleen_C_Intel
Employee

The front-facing cameras work differently than the R200, so please clarify which camera you need info on.

Granger_L_
Beginner

Colleen Culbertson (Intel) wrote:

The front-facing cameras work differently than the R200, so please clarify which camera you need info on.

Hi Colleen, it would be great if I could get info on both the R200 and the F200/SR300.

Colleen_C_Intel
Employee

The R200 has left and right depth (IR) cameras and computes depth by stereo correspondence, so it works much like your eyes. See https://software.intel.com/en-us/articles/realsense-r200-camera
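The stereo principle is the standard triangulation relation Z = f·B/d (depth equals focal length times baseline over disparity). Here is a minimal sketch of that formula; the focal length and baseline below are illustrative placeholders, not actual R200 calibration values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth (meters) from stereo disparity.

    focal_px     -- focal length of the rectified cameras, in pixels
    baseline_m   -- distance between the two IR cameras, in meters
    disparity_px -- horizontal pixel shift of a feature between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only (hypothetical, not real R200 intrinsics):
z = depth_from_disparity(focal_px=600.0, baseline_m=0.07, disparity_px=30.0)
print(z)  # 600 * 0.07 / 30 = 1.4 meters
```

Note that depth resolution degrades with distance: the same one-pixel disparity error corresponds to a larger depth error the farther the object is.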

The F200/SR300 depend more heavily on the IR projector, using a single IR camera with patterned (coded) light. See https://software.intel.com/en-us/articles/a-comparison-of-intel-realsensetm-front-facing-camera-sr30...

Would also suggest reading this forum topic: https://software.intel.com/en-us/forums/realsense/topic/543419

samontab
Valued Contributor II

How the F200/SR300 works is explained here:

https://software.intel.com/en-us/forums/realsense/topic/537872#comment-1810928

Basically, it projects a series of IR "band" patterns that change over time. Have a look at the previous link for more details.
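The general idea behind temporal coded light is that each projected pattern contributes one bit per pixel; across the sequence, each pixel accumulates a code identifying which projector stripe illuminated it, which then allows triangulation. Intel's actual coding scheme is proprietary, so the following is only a sketch of the generic binary-coding idea, not the F200/SR300 implementation:

```python
def decode_temporal_code(bits):
    """Decode a per-pixel sequence of observed pattern bits (MSB first)
    into a projector stripe index. With N patterns, 2**N stripes can
    be distinguished."""
    code = 0
    for b in bits:
        code = (code << 1) | (1 if b else 0)
    return code

# A pixel seen bright, dark, bright across three successive patterns
# maps to stripe index 0b101 = 5:
stripe = decode_temporal_code([1, 0, 1])
print(stripe)  # 5
```

Real systems typically use Gray codes or phase-shifted patterns rather than plain binary, so that a one-bit decoding error shifts the result by at most one stripe.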

The R200, on the other hand, uses a static IR "dot" pattern, similar to the Kinect 1, combined with passive IR stereoscopy. Note that the Kinect 2 uses a different technology: time-of-flight.

Now, when you use more than one of these cameras, keep in mind that they are active sensors. If their IR patterns overlap in the same region, the patterns mix and degrade depth sensing for both cameras. You can separate them spatially so they cover different areas, or multiplex them in time: switch one camera's IR projector off while the other projects its pattern and estimates depth, and vice versa.
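The time-multiplexing approach can be sketched as a simple round-robin scheduler. The `Camera` class below is a hypothetical stand-in; the actual RealSense SDK calls for toggling the projector and grabbing frames differ, but the scheduling logic is the same:

```python
class Camera:
    """Hypothetical stand-in for a depth-camera handle with a
    controllable IR projector (not the real RealSense SDK API)."""

    def __init__(self, name):
        self.name = name
        self.projector_on = False

    def set_projector(self, on):
        self.projector_on = on

    def grab_depth_frame(self):
        # Placeholder: a real implementation would return a depth image.
        return {"camera": self.name, "projector": self.projector_on}


def time_multiplex(cameras, cycles):
    """Round-robin: at any instant, exactly one camera projects its IR
    pattern and captures depth, so the patterns never overlap."""
    frames = []
    for _ in range(cycles):
        for active in cameras:
            for cam in cameras:
                cam.set_projector(cam is active)
            frames.append(active.grab_depth_frame())
    return frames


cams = [Camera("left"), Camera("right")]
frames = time_multiplex(cams, cycles=2)
# Frames alternate left/right, each captured with its own projector on.
```

The trade-off is effective frame rate: with two cameras alternating, each one delivers depth at roughly half its native rate, which matters for real-time hand tracking.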
