
SR300 Coded Light Pattern

Martin_E_
Beginner

Hi folks,

I am writing my bachelor thesis on the ambient-light dependence of the RealSense SR300. My current problem is understanding the emitted laser pattern. It's not absolutely necessary for my thesis, but I would like to understand it. As samontab states in this thread https://software.intel.com/en-us/forums/realsense/topic/537872, the pattern looks like this (I don't know if he took the photo himself; maybe he could clarify that :-) ):

[Image: intelPattern2.jpg]

If this is the case, I am totally lost as to how the camera ASIC can interpret it, because it seems much more complicated (if not impossible, in my opinion) for the device to determine which stripe is which, i.e. to solve the correspondence problem that coded light is used for.

My previous assumption was that the SR300 emits its pattern like every other time-coded light pattern, for instance as shown on the third slide here: http://slideplayer.com/slide/9302938/ That is, the whole scene is first covered with one black and one white stripe, then with two black and two white stripes, and so on; with this changing pattern, each point can be uniquely identified by the code it sees over time. Strangely, all possible stripe combinations are sometimes illustrated at once in documentation and the like, which looks exactly like the pattern in the image above (e.g. figure 4 in https://www.osapublishing.org/aop/fulltext.cfm?uri=aop-3-2-128&id=211561).
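
To illustrate what I mean by a time-coded sequence, here is a small toy example I put together (only to show the principle, not Intel's actual implementation). It generates a set of Gray-code column patterns and then decodes the black/white sequence that one pixel column sees over time back into its stripe index, which is how the correspondence problem is normally solved:

```cpp
#include <cstdio>
#include <vector>

// Minimal illustration of time-multiplexed Gray-code structured light:
// N patterns are projected one after another; the sequence of black/white
// values a pixel sees over time is the Gray code of its stripe index.
int main() {
    const int numPatterns = 4;             // 4 patterns -> 2^4 = 16 distinguishable stripes
    const int numColumns  = 1 << numPatterns;

    // pattern[k][x] == 1 means column x is lit (white) in pattern k.
    std::vector<std::vector<int>> pattern(numPatterns, std::vector<int>(numColumns));
    for (int x = 0; x < numColumns; ++x) {
        int gray = x ^ (x >> 1);           // binary-reflected Gray code of the column index
        for (int k = 0; k < numPatterns; ++k)
            pattern[k][x] = (gray >> (numPatterns - 1 - k)) & 1;  // MSB first = coarsest stripes
    }

    // Print the patterns: each printed row is one projected frame in time.
    for (int k = 0; k < numPatterns; ++k) {
        for (int x = 0; x < numColumns; ++x)
            std::putchar(pattern[k][x] ? '#' : '.');
        std::putchar('\n');
    }

    // Decoding: read the bits one pixel column saw over the whole sequence
    // and recover its stripe index (Gray -> binary conversion).
    int x = 9;
    int gray = 0;
    for (int k = 0; k < numPatterns; ++k)
        gray = (gray << 1) | pattern[k][x];
    int index = gray;
    for (int shift = 1; shift < numPatterns; shift <<= 1)
        index ^= index >> shift;
    std::printf("pixel column %d decoded stripe index: %d\n", x, index);
    return 0;
}
```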

So when I look at the image above, it looks like all the different patterns are emitted at once, which makes no sense to me. I also thought the camera that captured the image might simply be too slow to capture each pattern individually, but that still doesn't explain how such a pattern could be formed. Can somebody please explain the technique used, or correct my assumptions?

Thanks,
Martin

samontab
Valued Contributor II

Hi Martin,

Yeah, I took the image and posted it here. You'll see it posted on some other websites as well though...

From what I've seen, they seem to be Gray-coded structured-light column patterns with a configurable height, projected continuously on the scene.

You didn't post the other picture, which gives you an idea of how the patterns are actually being projected. You can see motion blur in this image. The patterns are moved vertically very quickly. The height of each pattern is defined by IVCAM parameters such as MotionRangeTradeOff. At one end of that trade-off you get closer to having one pattern projected over the entire scene, then the next one, and so on, as you describe. At the other end you have a series of patterns with a very short height, each of them not covering the entire scene.

[Image: intelPattern1.jpg]
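
To illustrate the sweeping idea, here is a rough simulation (just my reading of what the projector seems to do, not anything official from Intel): each Gray-code pattern is drawn only on a narrow horizontal band, successive bands carry successive patterns, and an exposure that spans the sweep integrates them into one image where all patterns appear stacked, coarse to fine, much like the photo above.

```cpp
#include <cstdio>
#include <vector>

// Sketch of the "sweeping band" interpretation: each Gray-code column
// pattern lights only a narrow horizontal band, the bands sweep down the
// frame faster than the camera exposure, and the camera integrates them
// into a single image in which all patterns appear stacked at once.
int main() {
    const int numPatterns = 9;                 // e.g. the COARSE mode's 9 coded patterns
    const int width = 256, height = numPatterns * 20;
    const int bandHeight = height / numPatterns;

    std::vector<unsigned char> img(width * height);
    for (int y = 0; y < height; ++y) {
        int k = y / bandHeight;                // which pattern is lighting this band
        for (int x = 0; x < width; ++x) {
            int column = x * (1 << numPatterns) / width;
            int gray = column ^ (column >> 1);
            int bit = (gray >> (numPatterns - 1 - k)) & 1;   // pattern k = k-th Gray bit
            img[y * width + x] = bit ? 255 : 0;
        }
    }

    // Write a PGM so the composite can be viewed directly.
    std::FILE* f = std::fopen("composite.pgm", "wb");
    if (!f) return 1;
    std::fprintf(f, "P5\n%d %d\n255\n", width, height);
    std::fwrite(img.data(), 1, img.size(), f);
    std::fclose(f);
    return 0;
}
```

Open composite.pgm in any image viewer and you should see the same kind of stacked stripe "tree".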

Here is someone using a very similar pattern:

http://stackoverflow.com/questions/31781275/structured-light-how-to-do-when-the-projectors-resolution-is-lower-than-patte

Martin_E_
Beginner

Hi samontab,

Thanks for your reply!

Could you please tell me what kind of camera you used? I have an old Sony DSC with a removable IR filter, but even at the lowest exposure time I can't see a similar pattern, only the motion-blurred one.

So if we have a series of patterns (with the adjustable height), how is it possible that the whole Gray-code "tree" is visible? As I said before, in a "classic" coded-light application the patterns cover the same area one after another, because each image point is marked by the illumination code sequence it sees over time. Yet the picture suggests that each image point only receives one "pattern" of the sequence. Am I wrong, and the picture shows only one pattern of the whole sequence? I hope you understand what I mean :-)

I hope I can capture images like yours, to understand it directly.

Thanks!

samontab
Valued Contributor II

Sure, I used a Pi NoIR camera with a very low exposure time. Can't remember exactly, but probably around 2-3 ms. It's a bit tricky to get these images, though. Note that this is a processed image, as the amount of IR light received in the frame is very low.

To understand this a bit more, have a look at the IVCAMAccuracy parameter.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?ivcamaccuracy_device_pxccapture.html

These are the options for the F200 camera (the one I used for taking these images):

IVCAM_ACCURACY_FINEST: The finest level of accuracy: Use 11 coded patterns at 50 fps.

IVCAM_ACCURACY_MEDIAN: The median level of accuracy: Use 10 coded patterns at 55 fps. This is the default.

IVCAM_ACCURACY_COARSE: The coarse level of accuracy: Use 9 coded patterns at 60 fps.

Note that the new camera, SR300, only uses IVCAM_ACCURACY_FINEST, as the rest are deprecated.

If you look at those numbers, you can see that the IR projector runs at either 540 or 550 patterns per second, which means that each coded pattern is projected for only about 1.8 ms. That's why you need such low exposure times.
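
If you want the arithmetic spelled out, here is a tiny calculation using just the numbers above (nothing measured):

```cpp
#include <cstdio>

// Per-pattern projection time for each IVCAMAccuracy mode:
// patterns-per-depth-frame * depth fps = projector pattern rate,
// and its reciprocal is how long each coded pattern stays on screen.
int main() {
    struct Mode { const char* name; int patterns; int depthFps; };
    const Mode modes[] = {
        {"FINEST", 11, 50},
        {"MEDIAN", 10, 55},
        {"COARSE",  9, 60},
    };
    for (const Mode& m : modes) {
        int patternRate = m.patterns * m.depthFps;           // patterns per second
        double msPerPattern = 1000.0 / patternRate;
        std::printf("%-6s: %2d patterns x %2d fps = %3d patterns/s -> %.2f ms each\n",
                    m.name, m.patterns, m.depthFps, patternRate, msPerPattern);
    }
    return 0;
}
```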

With a combination of values for IVCAMAccuracy and MotionRangeTradeOff, you can vary how the coded patterns will be read by the camera. For example, if you want to maximise detection, you can use the coarsest setting of IVCAMAccuracy to run the IR projector at 540 patterns per second (9 coded patterns per depth frame), and set MotionRangeTradeOff to its maximum value to maximise the camera's exposure time.
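
In case it helps, this is roughly what setting both parameters looks like with the (now legacy) Windows RealSense SDK. I'm writing the calls from memory of the PXCCapture::Device documentation linked above, so treat the exact method names and the 0-100 range of MotionRangeTradeOff as assumptions and check them against your SDK version:

```cpp
// Hedged sketch, assuming the legacy Windows RealSense SDK (PXC* API) with
// an F200/SR300 attached. Verify the method names against your SDK docs.
#include "pxcsensemanager.h"

int main() {
    PXCSenseManager* sm = PXCSenseManager::CreateInstance();
    sm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 640, 480, 60);
    if (sm->Init() < PXC_STATUS_NO_ERROR) return 1;

    PXCCapture::Device* device = sm->QueryCaptureManager()->QueryDevice();

    // Fewer, coarser patterns -> 540 patterns/s projector rate (9 x 60 fps).
    device->SetIVCAMAccuracy(PXCCapture::Device::IVCAM_ACCURACY_COARSE);

    // Assumed 0..100 range: larger values lengthen the camera's exposure,
    // trading motion robustness for range, as described above.
    device->SetIVCAMMotionRangeTradeOff(100);

    // ... acquire and process depth frames here ...

    sm->Release();
    return 0;
}
```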
