Can one computer support two devices? For example, connecting two R200 cameras to one computer.
What I want is to double the recognition width of the R200. The R200 documentation says its depth range is 3-4 meters indoors, and I assumed its coverage width is about the same as its depth. My space is about 7 meters wide, so technically I would need two R200s side by side. So my question is: can one computer support two R200s and read from both?
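(Editor's note: the coverage width at a given distance follows from the camera's horizontal field of view, not from its depth range, so "width equals depth" is only a rough guess. A quick sketch of the geometry; the ~59° horizontal FOV is an assumed value for the R200's depth stream, so check the datasheet for your unit:)

```python
import math

def coverage_width(distance_m, hfov_deg=59.0):
    """Horizontal coverage of a camera at a given distance.

    Simple pinhole geometry: width = 2 * d * tan(FOV / 2).
    The 59-degree default is an assumption, not a measured spec.
    """
    return 2.0 * distance_m * math.tan(math.radians(hfov_deg / 2.0))

print(round(coverage_width(3.0), 2))
```

Under that assumption a single camera covers only about 3.4 m of width at 3 m distance, so two side-by-side R200s land close to (slightly under) the 7 m the room needs.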
If the answer is yes, what I want to do is recognize people moving through the space. That raises another question: how many people can be recognized in the space?
Thanks for your help!
@Mahadeo W. This is really good news. How did you do it? Were the two cameras working simultaneously? Is there an API that supports this, or did you write your own algorithm?
I saw a post here from 2015 where people said there were no official APIs for this:
As Douglas said above, librealsense (https://github.com/IntelRealSense/librealsense) does support multiple RealSense cameras, and it should work on Linux, Mac OS AND Windows. But it only provides the raw data, none of the advanced algorithms the official SDK provides. And the official SDK supports only one camera (and only on Windows).
Dang, I was not aware that Windows 10 is a supported platform. If you folks try to build it and it succeeds, please let us know. Actually, an update on whatever you do would be nice. Good luck.
Hey Jonathan, that is not always true.
R200 uses a combination of active and passive sensing.
If you are using active sensing, they WILL interfere with each other. If, on the other hand, you only use the passive IR stereo method, then they will NOT interfere with each other.
The R200 camera does not use any temporal or spatial patterns in its emitter that would interfere with another R200. If you have two R200s pointed at the same object with both of their emitters active, you will not find any interference of the type you'd see if you tried the same with the F200 or the older Creative Senz3D (erratic data, disappearing bands of pixels, pulsing images, etc).
If you have tried this with the R200 and have had issues, then I'd suggest checking your infrared stream, where you might see that the two overlapping emitter fields are over-saturating the IR image on closer objects. Simply turning on the LR auto-exposure option will correct that.
I suggest you try this and post images to this forum; if you're still having issues, I or one of my RealSense teammates can help you diagnose the problem.
I don't see how that is possible. I'm curious now to see how it would work.
As far as I understand, the R200 projects an IR dot pattern into the scene to calculate depth, similar to how the Kinect 1 works.
If you put a second R200 into the same scene, it would project its IR pattern as well, interfering with the first one. That makes sense to me, and I've seen this interference with two Kinect 1 devices. There are some mechanisms to alleviate it, such as vibrating one device so its pattern is blurred from the other camera's point of view, which was presented in a paper by Microsoft itself.
How is the R200's projected pattern different from the Kinect 1's, such that it doesn't cause interference?
The R200 is fundamentally a stereo camera system that uses infrared instead of RGB, so it will work even with the emitter disabled, provided there is both enough ambient IR (from sunlight or an incandescent or halogen bulb) AND enough recognizable features in the scene for both IR sensors to identify.
The emitter's function is to fill in when there isn't enough of either or both, and a second or third emitter just adds more and more texture, which is great up until there's so much IR that it oversaturates (whites out) the sensor. Usually the auto-exposure can be turned on and you're fine.
If you try the R200 outside (though not the F200), you'll probably find that highly textured objects like grass or rough concrete show up just fine in the depth map, with or without the emitter turned on (as long as you remember to turn auto-exposure back on, since sunlight's IR is quite a bit brighter than the sensor's normal expected range), but smooth, untextured objects won't have enough natural feature points for the cameras to stereo-match on.
Other 3D cameras may work as you describe, by comparing the detected pattern against an internally-stored version, but this is not how the R200 functions.
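(Editor's note: the stereo-matching idea described above can be shown with a toy sketch. This is not the R200's actual algorithm, just a minimal sum-of-squared-differences block matcher in NumPy: given a textured view and a horizontally shifted second view, the matcher recovers the shift (disparity) from the texture itself, which is why it doesn't matter whether the texture is natural or comes from one or more emitters.)

```python
import numpy as np

def block_match(left, right, block=9, max_disp=16):
    """Brute-force SSD block matching along scanlines.

    For each pixel in the left image, search the right image
    leftward up to max_disp pixels and keep the best-matching
    block. Returns the per-pixel disparity estimate.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.sum((patch - cand) ** 2)
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

rng = np.random.default_rng(0)
# Dense random texture, standing in for IR speckle on a surface.
scene = rng.random((40, 80))
true_disp = 5
left = scene
right = np.roll(scene, -true_disp, axis=1)  # shifted view = parallax

disp = block_match(left, right)
interior = disp[10:30, 30:70]  # ignore borders and roll wrap-around
print(np.bincount(interior.ravel()).argmax())  # dominant disparity
```

The matcher recovers the true disparity of 5 everywhere in the interior. Note that a textureless surface (e.g. `scene = np.zeros(...)`) would make every candidate block identical and the match ambiguous, which is exactly the failure mode the emitter exists to fix.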
Got it. I was wrongly assuming how the R200 worked.
Thanks for the explanation. As you said, the R200 is basically an IR stereo depth camera.
And, you're right, because of this, even if you point multiple R200s at the same region, you should still get correct depth information without interference. Great!