Hi there,
Can one computer support two devices? For example, connecting two R200 cameras to one computer.
What I want is to double the recognition width of the R200. The R200 documentation says its depth range is 3-4 meters indoors, and I assume its coverage width is about the same as its depth. My space is about 7 meters wide, so technically I would need two R200s side by side. So my question is: can one computer support two R200s and run recognition on both?
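For a rough sanity check on that width assumption, here is a back-of-the-envelope sketch (illustrative only; the ~59° horizontal depth field of view is my own assumption, so please verify it against the R200 datasheet):

```python
import math

# ASSUMPTION: the R200's horizontal depth field of view is roughly
# 59 degrees -- check the datasheet for your unit before relying on this.
def coverage_width_m(distance_m, fov_deg=59.0):
    """Width of the area seen at a given distance: 2 * d * tan(fov / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg / 2.0))

print(round(coverage_width_m(3.5), 2))   # roughly 3.96 m of width at 3.5 m range
```

Under that assumption, one camera covers roughly 4 meters of width near the far end of its stated range, so two cameras side by side would plausibly cover a 7-meter-wide space.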
If the answer is yes, what I want to do is recognize people moving through the space. That raises another question: how many people can be recognized in the space at once?
Thanks for your help!
Last I heard, only the Linux librealsense package can handle two cameras, so if you are on Windows with no access to a Linux system, you will not be able to do this.
@Douglas K. Thanks a lot for your answer. I'll try that on Linux.
Yes, it supports two cameras. I tested it on Windows 10 using a Transcend USB hub.
@Mahadeo W. This is really good news. How did you do it? Were both cameras working simultaneously? Was there an API that supported this, or did you write your own code?
I saw people saying there were no official APIs for this in a 2015 post here:
https://software.intel.com/en-us/forums/realsense/topic/543198
As Douglas said above, librealsense (https://github.com/IntelRealSense/librealsense) does support multiple RealSense cameras, and it should work on Linux, Mac OS AND Windows. However, it only provides the raw data, none of the advanced algorithms the official SDK provides. The official SDK, in turn, supports only one camera (and only on Windows).
Dang, I was not aware that Windows 10 is a supported platform. If you fellows try to build it and it succeeds, please let us know. Actually, an update on whatever you end up doing would be nice. Good luck.
I am facing the same problem as you. Have you found a solution? Where are the APIs for handling different devices?
Use librealsense.
It works on Windows, Linux, and Mac, and supports all the current RealSense cameras, as well as multiple-camera capture on a single computer.
You can run multiple cameras on Windows. However, if the cameras are facing the same way, they will interfere with each other's operation.
Well, as with any active sensor, you need to separate the cameras' fields of view either in space or in time.
The R200s will NOT interfere with each other; you can safely use them together on any of the aforementioned platforms using librealsense.
Hey Jonathan, that is not always true.
R200 uses a combination of active and passive sensing.
If you are using active sensing, they WILL interfere with each other. If, on the other hand, you only use the passive IR stereo method, then they will NOT interfere with each other.
The R200 camera does not use any temporal or spatial patterns in its emitter that would interfere with another R200. If you have two R200s pointed at the same object with both of their emitters active, you will not find any interference of the type you'd see if you tried the same with the F200 or the older Creative Senz3D (erratic data, disappearing bands of pixels, pulsing images, etc).
If you have tried this with the R200 and have had issues, then I'd suggest you check your infrared stream; you might see that the two overlapping emitter fields are over-saturating the IR image on closer objects. Simply turning on the LR Autoexposure option will correct that.
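To make the over-saturation point concrete, here is a toy numeric sketch (plain NumPy arithmetic with made-up values, not camera data or a camera API): two overlapping emitters roughly double the IR signal, the 8-bit sensor clips at 255, and the scene's texture contrast collapses until exposure is scaled back down.

```python
import numpy as np

# Made-up IR "texture" values as seen with a single emitter.
scene = np.array([120.0, 180.0, 140.0, 200.0])

one_emitter  = np.clip(scene,       0, 255)   # within the 8-bit sensor range
two_emitters = np.clip(scene * 2.0, 0, 255)   # doubled IR light, mostly clipped

# Peak-to-peak range is a crude stand-in for texture contrast.
print(np.ptp(one_emitter), np.ptp(two_emitters))   # 80.0 15.0 -- contrast collapses

# Auto-exposure effectively rescales the signal back into range:
rescaled = np.clip(scene * 2.0 * 0.5, 0, 255)
print(np.ptp(rescaled))                            # 80.0 -- contrast restored
```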
I suggest you try this, and post images to this forum, and if you're still having issues then I or one of my RealSense teammates can help you diagnose the issue.
I don't see how that is possible. I'm curious now to see how it would work.
As far as I understand, R200 projects an IR dot pattern into the scene to calculate depth. Similar to how the Kinect 1 works.
If you put a second R200 into the same scene, it would project its IR pattern as well, interfering with the first one. That makes sense, and I've seen this interference with two Kinect 1 devices. There are some mechanisms to alleviate it, such as vibrating one device so that its pattern appears blurred to the other camera, a trick presented in a paper by Microsoft itself.
How is the R200's projected pattern different from the Kinect 1's, such that it doesn't cause interference?
The R200 is fundamentally a stereo camera system that uses infrared instead of RGB, so it will work even if the emitter is disabled if there is both enough ambient IR (from sunlight or an incandescent or halogen bulb) AND enough recognizable features that both the IR sensors can identify in the scene.
The emitter's function is to fill in when there isn't enough of either or both, and a second or third emitter just adds more and more texture, which is great up until there's so much IR that it oversaturates (whites out) the sensor. Usually the auto-exposure can be turned on and you're fine.
If you try the R200 outside (though not the F200), you'll probably find that highly textured objects like grass or rough concrete show up just fine in the depth map, with or without the emitter turned on (as long as you remember to turn auto-exposure on again, since sunlight's IR is quite a bit brighter than the sensor's normal expected range). Smooth, untextured objects, however, won't have enough natural feature points for the cameras to stereo-match on.
Other 3D cameras may work as you describe, by comparing the detected pattern against an internally-stored version, but this is not how the R200 functions.
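Since this thread keeps coming back to how stereo matching works, here is a minimal 1-D sketch of the idea (illustrative Python, not the R200's actual pipeline; all names are made up): take a patch from the left image, slide it along the right image, and pick the shift with the lowest sum-of-absolute-differences. Textured scenes give a sharp minimum; a flat, untextured signal makes every shift look equally good, which is exactly why the emitter's projected texture helps.

```python
import numpy as np

def best_disparity(left, right, x, window, max_disp):
    """Find the shift of left[x : x+window] that best matches the right view,
    by minimizing sum-of-absolute-differences over candidate disparities."""
    patch = left[x:x + window]
    costs = [np.abs(patch - right[x - d:x - d + window]).sum()
             for d in range(max_disp)]
    return int(np.argmin(costs))

# A textured scene: the right view is the left view shifted by 3 pixels.
rng = np.random.default_rng(0)
left = rng.random(64)
right = np.roll(left, -3)
print(best_disparity(left, right, x=20, window=8, max_disp=10))   # 3

# An untextured scene: every candidate shift matches equally well,
# so the "winner" is just a tie-break and carries no depth information.
flat = np.ones(64)
print(best_disparity(flat, flat, x=20, window=8, max_disp=10))    # 0 (a tie)
```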
Try it!
Got it. I was wrongly assuming how the R200 worked.
Thanks for the explanation. As you said, the R200 is basically an IR stereo depth camera.
And, you're right, because of this, even if you point multiple R200s into the same region, you should still get correct depth information without interference. Great!
