Software Archive
Read-only legacy content

Multiple RealSense Cameras

Wooram_S_
Beginner
2,292 Views

Hello,

I need the following configuration, which I have tested:

- 2 or 3 RealSense cameras connected to one PC

- Only one RealSense camera in use at any given time

However, I can't switch between the cameras. I can only access one camera (probably the first one connected).

Is it not possible to switch cameras?

Thanks

 

 

13 Replies
MartyG
Honored Contributor III

That's a question I would love an answer to, as it would be much easier to control a full-body avatar if one camera could watch the face, one the hands, and another the feet (RealSense can recognize the feet and knee joints and treats them like fingers and a palm).  You could do it with only 2 cams, one for both face and hands, but the hands tend to block the face when they are lifted.

If it were possible to have more than one camera used by an application, I speculate that it would be easier to do in a game creation engine like Unity.  Unity has an Input Manager interface where you can define up to 80 different control inputs from multiple USB controllers attached to the computer like joypads and steering wheels.  It might take some custom programming to get Unity to recognize the camera as a joystick-type controller, but it's within the realms of possibility.

Another approach might be to use C++ or C# code to enable and disable USB ports so that only 1 cam at a time is active.  There's plenty of info on this if you google something like 'c++ disable usb port'. 
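As an illustration of the "only one camera active at a time" pattern described above, here is a minimal sketch in Python. The `UsbCamera` class and its `enable` / `disable` methods are hypothetical stand-ins for whatever OS-level USB port control (or SDK call) you would actually use; only the switching logic is the point.

```python
class UsbCamera:
    """Hypothetical stand-in for a real USB camera device.

    In a real application, enable()/disable() would call into
    platform-specific USB port control, so that only one device
    is powered at a time.
    """
    def __init__(self, name):
        self.name = name
        self.enabled = False

    def enable(self):
        self.enabled = True

    def disable(self):
        self.enabled = False


class CameraSwitcher:
    """Keeps at most one camera enabled at any moment."""
    def __init__(self, cameras):
        self.cameras = cameras
        self.active = None

    def switch_to(self, index):
        if self.active is not None:
            self.active.disable()   # power down the old camera first
        self.active = self.cameras[index]
        self.active.enable()        # then bring up the new one
        return self.active


cams = [UsbCamera("cam0"), UsbCamera("cam1"), UsbCamera("cam2")]
switcher = CameraSwitcher(cams)
switcher.switch_to(0)
switcher.switch_to(2)
print([c.enabled for c in cams])  # only cam2 remains enabled
```

The key design point is that the old device is disabled before the new one is enabled, so the SDK never sees two active cameras at once.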

It seems that Unity can at least use multiple cameras to switch between basic webcam video feeds (i.e. just the view the camera is seeing).

http://docs.unity3d.com/ScriptReference/WebCamTexture.html

samontab
Valued Contributor II

I tried it; the SDK only allows access to one camera. Someone from Intel gave that answer to a similar question I asked...

MartyG
Honored Contributor III

I have something that may add useful insight to the multi-camera debate.  I made a model in Unity of my squirrel-guy avatar using a 3D-printed super-hero mask with a custom-built arrangement of RealSense hardware inside.

http://test.sambiglyon.org/sites/default/files/rsmask.jpg

The basic premise was that this character had taken apart a couple of the forthcoming RealSense tablets, removed the camera circuit-boards, and used them in his mask.  The rear camera inside the mask would take readings of the wearer's eye expressions and project them onto digital eye displays on the front of the mask, shown in a different eye color to the wearer's, to help disguise his real identity.

The second camera on the front of the mask filmed live video of what it was seeing and displayed it on a smartphone screen attached to the camera (as a replacement for the bulkier tablet screen, which would not fit inside the mask casing), allowing the wearer to see the environment around them, as the mask was a sealed unit that could not be seen out of.  Both cameras were powered by a smartphone battery taped next to the screen.

Access to the mask was gained via runner rods that allowed the top half of the mask to be lifted up, revealing the RealSense electronics inside.  The wearer could put the mask on by lifting it up to the face and putting their head forward into it (as it was open at the back whilst the upper part was lifted) and then press a secret button on the mask to drop the upper section down over the face, covering the electronics and the wearer's head.

Vidyasagar_MSC
Innovator

Hi Marty,

Looks like a new experiment from you. Out of enthusiasm: what are you trying to do with multiple RealSense cameras, and why do you require more than one?

 

MartyG
Honored Contributor III

Well it was more of a game storyline exercise than something I was trying to build, where the character in the story world powered by RealSense technology was using RealSense in his own inventions.  I believe the word for such a thing would be "meta."  :)

Another example I came up with in the storyline was connecting the cameras to limb augmentation motors for a paralyzed person who had a little hand motion, so that by moving their hand a little, the camera could convert the finger motions into commands to the motors to be able to control a full artificial arm in the same way that my avatar tech converts small hand inputs to the camera into complex arm motions.

The camera(s) could be worn on a belt, with an upward-pointing camera watching the hands to power motorized upper-body mechanisms, and a downward-pointing camera reading toe motions and converting them into movements of motors attached to the legs.  For someone who does not have feeling in the feet, an arm on the upper body could be controlled with one hand and the legs / feet could be moved with motions of the other hand.

The current RealSense SDK recognizes the feet as a hand, allowing objects to be controlled with the toes by assigning them to the hand controls (like Index Joint 1) and waving your foot in front of the camera with your hand behind your back or dropped out of tracking view so the camera doesn't get confused.  

I read a story in the tech mag Wired recently about how Intel Labs worked with Professor Stephen Hawking to upgrade his wheelchair's computer tech to help him work more easily, so that was an inspiration to me too.

Wooram_S_
Beginner

I connected three RealSense sensors to my host PC for use with one application program.

I wonder if I can select one of them to be activated and used by my application.

I don't intend to use all the sensors simultaneously for now.

To be specific, I used the Intel Perceptual Computing SDK 2013 + Senz3D before, and it was possible to access several sensors at the same time.

When I try the same application with the Intel RealSense model, it is no longer possible.

This is why I cannot upgrade the sensors to RealSense.

Is there any plan to fix the SDK for this issue?

Seung-hwa_S_
Beginner

I agree that RealSense should support the use of multiple devices.

I cannot understand why it is not available in the latest version. :(

Should we crack device drivers?

MartyG
Honored Contributor III

Hi Seung-Hwa,

I don't think you need to "crack" the camera drivers (which makes it sound as though you are doing something wrong), as Intel already supports developers making their own custom algorithms for the camera software.  :)

In fact, an Intel article from 4 days ago makes reference to this, saying 

"Of course, a more “user-friendly” system comes at the cost of granular control.  Developers have a lot less access to raw data in the Intel RealSense SDK and customizing processing algorithms is no longer a simple matter.  In the end, though, the Intel RealSense SDK is a major improvement over Intel Perceptual Computing at basically every level. And while the nerdcore coder in us miss the unfettered data stream, the deadline-oriented coder is grateful for the improved level of accessibility and productivity."

https://software.intel.com/en-us/articles/to-realsense-w-unity

So whilst making your own camera algorithm may not be straightforward, it is likely still possible for a capable coder.

Seung-hwa_S_
Beginner

Hi, Marty.

Thanks for your reply.

I know Intel is doing its best for the SDK and developers; my use of the word "cracking" was a kind of joke, because I am stuck on a problem with RealSense.

However, what I cannot understand is why Intel blocked access to multiple devices on one host computer.

I think limiting the number of sensors limits the opportunities for developers' creativity, such as your project.

I am considering disabling each device as you mentioned.

MartyG
Honored Contributor III

I think the simplest approach to using multiple cameras without writing a new algorithm or switching the USB ports on and off would be to have a separate PC for each camera and have the output of each camera (e.g. a camera-controlled avatar) logged into an online "room" environment built in something like Unity.  Then you could merge together each camera-controlled element on a single screen.

Here's a video of a RealSense tech demo I built in Unity yesterday of two hugging and kissing full-body avatars controlled with a single camera.  The slight awkwardness of it (even taking into account the still-glitchy collision detection between the avatars) highlights what a difference allocating a specific element to its own camera could make.

https://www.youtube.com/watch?v=2IrgwPdgK-g&feature=youtu.be

Seung-hwa_S_
Beginner

I considered a distributed system like your idea, but it is not worth it for my application; it costs a lot and is too complicated. My system should be light and look attractive to customers. Anyway, thank you. If I find any way or idea for this, I will share it here.

PKusm
New Contributor I

Hi, 
Is there anybody who has managed to use multiple RealSense cameras for one application?

The current SDK (5.0.3.7777) documentation says that "It is possible to create multiple instances of PXCMSenseManager interface to work with different cameras." Does this mean that we can use multiple cameras simultaneously?
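For what it's worth, here is a minimal Python sketch of the pattern that documentation sentence suggests: one manager instance per physical camera, each bound to a specific device before streaming starts. The `SenseManager`, `Device`, and `filter_by_device` names below are illustrative stand-ins, not the real C++/C# SDK API; only the one-manager-per-device structure is taken from the quoted documentation.

```python
class Device:
    """Stand-in for a physical camera as reported by device enumeration."""
    def __init__(self, device_id):
        self.device_id = device_id


class SenseManager:
    """Illustrative stand-in for the SDK's PXCMSenseManager.

    The documented idea is one manager instance per camera, each
    filtered to a specific device before streaming begins.
    """
    def __init__(self):
        self.device = None
        self.streaming = False

    def filter_by_device(self, device):
        # Mirrors binding a manager to one camera before start-up.
        self.device = device

    def start(self):
        if self.device is None:
            raise RuntimeError("no device selected")
        self.streaming = True


devices = [Device("cam-A"), Device("cam-B")]
managers = []
for dev in devices:
    mgr = SenseManager()      # one manager instance per camera
    mgr.filter_by_device(dev)
    mgr.start()
    managers.append(mgr)

print([m.device.device_id for m in managers])
```

Whether the public SDK actually streams from both managers at once, rather than just allowing the instances to be created, is exactly the open question in this thread.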

 

samontab
Valued Contributor II

Well, it is clearly possible at some level, given the use of six RealSense cameras in the ASCtec drone demo. But it seems that the public SDK does not expose this ability.
