I recently bought an Intel RealSense camera. In the sample project "DF_CameraViewer" under RSSDK, the color output from the camera and the depth output have different fields of view.
I am trying to get them into the same field of view. How can this be done?
(Note: I tried the function CreateColorImageMappedToDepth, passing the depth and color images as arguments, but the color image I got back did not match the depth image's field of view.)
What do you get when you use CreateColorImageMappedToDepth? The DF_Projection sample uses this method to obtain a colour image the size and shape of the depth image such that an (x,y) point on the new image will refer to the same point in the world as the same (x,y) point on the depth image. Is that not what you want?
The output I got was mapped, but the (x,y) points did not line up with the corresponding (x,y) points in the depth image.
Any idea what pixel format CreateColorImageMappedToDepth expects? I passed in data in PIXEL_FORMAT_RGB32.
The docs state "Each input color image pixels are mapped to the output color image.", so I guess that means it's the same pixel format as the input image (same pixels, just returned in a different order).
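As a toy illustration of "same pixels, just a different order" (plain C#, nothing RealSense-specific): a mapping like this is essentially a per-pixel gather, so the element type and format of the output necessarily match the input:

```csharp
// Toy example: a "mapped" image is just the source pixels copied out in a
// new order. The pixel values (here, packed RGB32 uints) are unchanged.
uint[] src = { 0xFF0000FF, 0xFF00FF00, 0xFFFF0000, 0xFFFFFFFF };
int[] map = { 3, 2, 1, 0 };   // for each output pixel, which source pixel to copy
var dst = new uint[map.Length];
for (int i = 0; i < map.Length; i++)
    dst[i] = src[map[i]];     // same values, reordered
```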
I've only used the converse method (CreateDepthImageMappedToColor), though to get at the depth data within, not to display the image. The following (C#) code gives me an array of depth values mapped to the colour image (so I can point to a pixel in the colour image and get the depth of that point):
    PXCMImage.ImageData mdata;
    var mappedImage = projection.CreateDepthImageMappedToColor(depth, image);
    mappedImage.AcquireAccess(PXCMImage.Access.ACCESS_READ,
        PXCMImage.PixelFormat.PIXEL_FORMAT_DEPTH, out mdata);
    var mwidth = mappedImage.info.width;
    var mheight = mappedImage.info.height;
    // pitches is per-plane and in bytes; divide by 2 for 16-bit depth values
    var mpitch = (Int32)(mdata.pitches[0] / 2.0);
    var mappedPixels = mdata.ToShortArray(0, mwidth * mheight);
    mappedImage.ReleaseAccess(mdata);
    mappedImage.Dispose();
I'd imagine you would do something similar to get the colour pixels.
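By analogy with the depth snippet above, a sketch for the colour pixels might look like the following. I haven't run this exact code, so treat the ToByteArray call and the RGB32 stride as assumptions to verify against the docs:

```csharp
// Hypothetical sketch: map the colour image into depth space, then read
// the RGB32 pixels out. Mirrors the depth snippet above; untested.
PXCMImage.ImageData cdata;
var colorMapped = projection.CreateColorImageMappedToDepth(depth, image);
colorMapped.AcquireAccess(PXCMImage.Access.ACCESS_READ,
    PXCMImage.PixelFormat.PIXEL_FORMAT_RGB32, out cdata);
var cwidth = colorMapped.info.width;    // should equal the depth image width
var cheight = colorMapped.info.height;  // should equal the depth image height
// RGB32 is 4 bytes per pixel, so pull out width * height * 4 bytes
var colorPixels = cdata.ToByteArray(0, cwidth * cheight * 4);
colorMapped.ReleaseAccess(cdata);
colorMapped.Dispose();
```

If the mapped image still doesn't line up point-for-point with the depth image, it's worth checking that the depth and colour streams were captured from the same frame before calling the projection method.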