Beginner

Specifying image resolution/fps using EnableStreams

Jump to solution

For my augmented reality application, I'd like to synchronously stream color and depth images from the camera. The method recommended in the user guide is to use EnableStreams() and StreamFrames(), something like:

PXCMSenseManager sm = PXCMSenseManager.CreateInstance();

// Select the color and depth streams
PXCMVideoModule.DataDesc ddesc = new PXCMVideoModule.DataDesc();
ddesc.deviceInfo.streams = PXCMCapture.StreamType.STREAM_TYPE_COLOR | PXCMCapture.StreamType.STREAM_TYPE_DEPTH;
sm.EnableStreams(ddesc);

// Initialize my handler
PXCMSenseManager.Handler handler = new PXCMSenseManager.Handler();
handler.onNewSample = OnNewSample;
sm.Init(handler);

// Streaming
sm.StreamFrames(true);

However, as my research shows, I need to be quite specific about the resolution and frame rate of the camera. It's clear how to achieve this for asynchronous streams, since the EnableStream method takes explicit parameters; e.g. one writes something like:

sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_COLOR,640,480,30);

But how do I give a similarly exhaustive stream description using EnableStreams()? Unfortunately, I haven't been able to find any example where something like that was done.


Accepted Solutions
Valued Contributor II

To answer your original question, you can set the stream parameters in the DataDesc object. So in the sample code above you'd have to add something like this:

ddesc.streams[PXCMCapture.StreamType.STREAM_TYPE_COLOR].frameRate.min = ddesc.streams[PXCMCapture.StreamType.STREAM_TYPE_COLOR].frameRate.max = fps;
ddesc.streams[PXCMCapture.StreamType.STREAM_TYPE_COLOR].sizeMin.height = ddesc.streams[PXCMCapture.StreamType.STREAM_TYPE_COLOR].sizeMax.height = height;
ddesc.streams[PXCMCapture.StreamType.STREAM_TYPE_COLOR].sizeMin.width = ddesc.streams[PXCMCapture.StreamType.STREAM_TYPE_COLOR].sizeMax.width = width;

And the same for STREAM_TYPE_DEPTH and whatever other streams you want to use.
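For completeness, here is a sketch of how this combines with the code from the first post. This is untested; 640x480 at 30 fps is just an example, so pick a mode your camera actually supports, and note that a loop over the stream types is only a tidier way of writing the assignments above:

// Constrain both streams to an explicit mode before Init()
PXCMSenseManager sm = PXCMSenseManager.CreateInstance();

PXCMVideoModule.DataDesc ddesc = new PXCMVideoModule.DataDesc();
ddesc.deviceInfo.streams = PXCMCapture.StreamType.STREAM_TYPE_COLOR | PXCMCapture.StreamType.STREAM_TYPE_DEPTH;

foreach (PXCMCapture.StreamType st in new[] {
    PXCMCapture.StreamType.STREAM_TYPE_COLOR,
    PXCMCapture.StreamType.STREAM_TYPE_DEPTH })
{
    // Setting min == max pins the SDK to exactly this mode
    ddesc.streams[st].frameRate.min = ddesc.streams[st].frameRate.max = 30;
    ddesc.streams[st].sizeMin.width = ddesc.streams[st].sizeMax.width = 640;
    ddesc.streams[st].sizeMin.height = ddesc.streams[st].sizeMax.height = 480;
}

sm.EnableStreams(ddesc);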

4 Replies
New Contributor I

In my experience the asynchronous data was pretty fast. What happens if you just feed asynchronous data to your system?

Also read this https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_essential_strong_synchronization.html

Beginner

Thanks Johannes. Perhaps I should have been more specific indeed.

The idea was to highlight objects in the color picture based on the depth data. For that, I need to align the color and depth images, and it looks reasonable to do that using the CreateDepthImageMappedToColor method. My code needs to be event-driven: I am bound by the architecture of a large project, and my program also receives events from other sensors. Now, the cited method works only when the received sample has both color and depth images, hence the synchronization. When I use unaligned streams, as I did before and as it is done in the example you've referenced, an event yields a sample containing a single image, color or depth, and the method will not work.
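For context, the handler I have in mind looks roughly like this. It's only a sketch (sm is the SenseManager from the first post), and it assumes the sample carries both images, which is exactly what fails with unaligned streams:

pxcmStatus OnNewSample(int mid, PXCMCapture.Sample sample)
{
    // Both images must be present for the mapping to work
    if (sample.color != null && sample.depth != null)
    {
        PXCMCapture.Device device = sm.QueryCaptureManager().QueryDevice();
        PXCMProjection projection = device.CreateProjection();

        // Depth image re-rendered in the color camera's coordinates
        PXCMImage mappedDepth = projection.CreateDepthImageMappedToColor(sample.depth, sample.color);

        // ... highlight objects on sample.color using mappedDepth ...

        mappedDepth.Dispose();
        projection.Dispose();
    }
    return pxcmStatus.PXCM_STATUS_NO_ERROR;
}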

Perhaps the problem could be bypassed asynchronously by querying the vertices first, saving them in an auxiliary field, then using MapDepthToColor to build my own array of 2D points, drawing my own depth picture, and working on that to make the highlight. But I find such a solution utterly ugly.

New Contributor I

Hello Nikolay,

I am not sure if I understand it right, but is it possible to produce two depth images in the bigger project you are bound to, one raw image and one mapped image? When your event triggers, you could take either image from shared memory or global space.

I have an R200, and neither of the projection methods (map to color or map to depth) produces good output for me. I once saw someone using a mixed camera-calibration method for mapping the depth and color images. As far as I remember, he used intrinsic calibration on both cameras to make them see the same field of view. After that, he made a distortion pattern manually with printed chessboards.

This was some time ago and I can't remember any references. :(
