
Camera mapping displacement of RGB camera from depth

Christian_N_
Beginner

Is there a known translation between the depth camera and colour camera for the R200?

I registered some depth images via ICP and got a camera pose for each. From this I can texture the 3D model created from the point clouds. However, since I am aligning the depth data, this gives me the pose of the depth camera. I wish to project the RGB images onto my model.

I am using PCL and successfully managed to create a textured mesh. However, the images are slightly displaced from where they should be because of the RGB camera's position. By translating my cameras by 0.05 I am able to get the image to line up with the mesh horizontally. Vertically (or in the depth direction) there is a small amount of displacement too. Trial and error is not the best method, though. Does anyone know the displacement, or a more scientific way to determine it?

Thanks!

7 Replies
kfind1
New Contributor I

When you are adding depth values to a point cloud, why not look up the mapped RGB coordinate and store the colour in each point (making it an XYZRGB type)? There is an example in the online reference: https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/doc_essential_coordinates_mapping.html

The Projection function provides the following mappings or projections:
• Map between color and depth image coordinates.
• Project among color and depth image coordinates, and world coordinates.
• Create spatially and size aligned color and depth images.
Example 40 shows how to map depth to color coordinates using the UV map.
Example 40: Map Depth to Color Coordinates using UV Mapping


// Create the PXCProjection instance.
PXCProjection *projection = device->CreateProjection();

// Color and depth image sizes.
PXCImage::ImageInfo dinfo = depth->QueryInfo();
PXCImage::ImageInfo cinfo = color->QueryInfo();

// Calculate the UV map: one normalized color coordinate per depth pixel.
PXCPointF32 *uvmap = new PXCPointF32[dinfo.width * dinfo.height];
projection->QueryUVMap(depth, uvmap);

// Translate the depth pixel coordinates uv[0..npoints-1] into color pixel
// coordinates ij[0..npoints-1] (both arrays are allocated by the caller).
for (int i = 0; i < npoints; i++) {
    ij[i].x = uvmap[(int)uv[i].y * dinfo.width + (int)uv[i].x].x * cinfo.width;
    ij[i].y = uvmap[(int)uv[i].y * dinfo.width + (int)uv[i].x].y * cinfo.height;
}

// Clean up.
delete[] uvmap;
projection->Release();

 

 

jb455
Valued Contributor II

For translating "a small number" of pixels (i.e., significantly less than the whole image), check out MapDepthToColor.

If you want to translate the whole image, use CreateColorImageMappedToDepth.
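As a rough illustration of the whole-image variant suggested here (a sketch against the RSSDK, not code from this thread; the stream resolutions are example values and error handling is omitted):

#include "pxcsensemanager.h"
#include "pxcprojection.h"

int main()
{
    // Set up depth + color streaming (resolutions are just example values).
    PXCSenseManager *sm = PXCSenseManager::CreateInstance();
    sm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 480, 360);
    sm->EnableStream(PXCCapture::STREAM_TYPE_COLOR, 1920, 1080);
    sm->Init();

    if (sm->AcquireFrame(true) >= PXC_STATUS_NO_ERROR) {
        PXCCapture::Sample *sample = sm->QuerySample();
        PXCProjection *projection =
            sm->QueryCaptureManager()->QueryDevice()->CreateProjection();

        // Whole image: color resampled into the depth camera's frame.
        PXCImage *mapped =
            projection->CreateColorImageMappedToDepth(sample->depth, sample->color);

        // ... use 'mapped' here, then release everything ...
        mapped->Release();
        projection->Release();
        sm->ReleaseFrame();
    }
    sm->Release();
    return 0;
}

MapDepthToColor instead takes an array of depth pixel coordinates (with depth values) and returns the matching colour pixel coordinates, which avoids resampling the whole image.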

Christian_N_
Beginner

kyran f. wrote:

When you are adding depth values to a point cloud, why not get the RGB mapped coordinate and save the RGB into the cloud point (to make the point an XYZRGB type), there is an example in the online reference: https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/d...

 

To clarify, I can already create a colourised point cloud, but that is not what I am trying to achieve. What I want to do is send an RGB image to PCL's texture mapping function, which overlays the image on a 3D model.
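For readers unfamiliar with that part of PCL, the kind of call being described is sketched below; the file names, resolution, focal length and pose are placeholders, not values from this thread.

#include <Eigen/Geometry>
#include <pcl/io/obj_io.h>
#include <pcl/surface/texture_mapping.h>

int main()
{
    // Mesh reconstructed from the registered depth clouds (placeholder file).
    pcl::TextureMesh mesh;
    pcl::io::loadOBJFile("model.obj", mesh);

    // One camera per registered frame.  The pose comes from the ICP
    // registration, so it is the depth camera's pose unless it is first
    // corrected by the depth-to-colour extrinsics discussed later in the thread.
    pcl::texture_mapping::Camera cam;
    cam.pose = Eigen::Affine3f::Identity();   // placeholder: ICP-derived pose
    cam.texture_file = "frame_000_rgb.png";   // placeholder: RGB image for this frame
    cam.width = 1920;
    cam.height = 1080;
    cam.focal_length = 1400.0;                // placeholder: RGB focal length in pixels

    pcl::texture_mapping::CameraVector cams;
    cams.push_back(cam);

    // Project each RGB image onto the mesh.
    pcl::TextureMapping<pcl::PointXYZ> tm;
    tm.textureMeshwithMultipleCameras(mesh, cams);
    return 0;
}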

Christian_N_
Beginner

James B. wrote:

For translating "a small number" of pixels (ie, significantly less than the whole image), check out MapDepthToColor.

If you want to translate the whole image, use CreateColorImageMappedToDepth.

I am using librealsense rather than the SDK, so unfortunately I do not have access to this function. However, wouldn't this shrink my 1920x1080 images to the depth resolution of 640x480?

Henning_J_
New Contributor I

I am using librealsense rather than the SDK so I do not have access to this function unfortunately

librealsense has its own projection API, so you could use the camera parameters from that to change the RGB image yourself.

But from looking at the C API documentation, you should be able to get the corrected image directly as a stream:

RS_STREAM_COLOR_ALIGNED_TO_DEPTH = 6, /**< Synthetic stream containing color data but sharing intrinsics of depth stream */

That sounds exactly like what you want.
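For anyone wondering how to reach that synthetic stream in practice, a minimal sketch with the legacy librealsense (1.x) C++ wrapper might look like this; the resolutions are just example R200 modes and are not taken from this thread.

#include <librealsense/rs.hpp>
#include <cstdint>

int main()
{
    rs::context ctx;
    rs::device *dev = ctx.get_device(0);

    // Enable the native streams; the aligned stream is synthetic, so it does
    // not need to be enabled itself.
    dev->enable_stream(rs::stream::depth, 480, 360, rs::format::z16, 30);
    dev->enable_stream(rs::stream::color, 1920, 1080, rs::format::rgb8, 30);
    dev->start();

    dev->wait_for_frames();

    // Colour data resampled to the depth stream's resolution and intrinsics
    // (the C++ counterpart of RS_STREAM_COLOR_ALIGNED_TO_DEPTH).
    const uint8_t *aligned = static_cast<const uint8_t *>(
        dev->get_frame_data(rs::stream::color_aligned_to_depth));

    // ... 'aligned' now holds one colour value per depth pixel ...
    return 0;
}

Because the synthetic stream shares the depth stream's intrinsics, each pixel of the aligned colour image corresponds to the same pixel in the depth image.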

Christian_N_
Beginner

Henning J. wrote:

I am using librealsense rather than the SDK so I do not have access to this function unfortunately

librealsense has its own projection API, so you could use the camera parameters from that to change the RGB image yourself.

But from looking at the C API documentation, you should be able to get the corrected image directly as a stream:

RS_STREAM_COLOR_ALIGNED_TO_DEPTH = 6, /**< Synthetic stream containing color data but sharing intrinsics of depth stream */

That sounds exactly like what you want.

 

The alignment would crop my RGB images (as far as I know), which is not preferable for me. However, I found the part of the projection API that exposes the camera extrinsics from depth to colour, and that lets me get the translation. Perfect, thanks!
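Reading those extrinsics with the legacy librealsense (1.x) C++ wrapper could look roughly like the following sketch (not the poster's actual code):

#include <librealsense/rs.hpp>
#include <cstdio>

int main()
{
    rs::context ctx;
    rs::device *dev = ctx.get_device(0);

    // Enabling the two streams makes sure their calibration data is available.
    dev->enable_stream(rs::stream::depth, rs::preset::best_quality);
    dev->enable_stream(rs::stream::color, rs::preset::best_quality);

    // Rigid transform taking points from the depth camera's frame into the
    // colour camera's frame: a 3x3 rotation plus a translation in metres.
    rs::extrinsics e = dev->get_extrinsics(rs::stream::depth, rs::stream::color);
    std::printf("depth -> colour translation: %f %f %f\n",
                e.translation[0], e.translation[1], e.translation[2]);
    return 0;
}

The translation component is the offset the original post was estimating by trial and error.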

Lesly_Z_
Beginner

How do you set RS_STREAM_COLOR_ALIGNED_TO_DEPTH while capturing video from the R200 with librealsense on Ubuntu?
