
Help mapping color to point

idata
Employee

Hello everyone,

This has been asked before, and I believe I am following the right steps, but I cannot get it to work. I am upgrading from RealSense to RealSense2 with an SR300 camera, and I want to abstract away getting a colored point cloud (x, y, z, r, g, b) for other uses. I am mapping the color to the depth points as follows:

Camera() : pipe(), pipeProfile(), depthScale(0) {
    rs2::config cameraConfig;
    cameraConfig.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16);
    cameraConfig.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8);
    pipeProfile = pipe.start(cameraConfig);
    depthScale = pipeProfile.get_device().first<rs2::depth_sensor>().get_depth_scale();
}

void Camera::getCapture() {
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depthFrame = frames.get_depth_frame();
    rs2::video_frame colorFrame = frames.get_color_frame();

    rs2_intrinsics depth_intrinsic = depthFrame.get_profile().as<rs2::video_stream_profile>().get_intrinsics();
    rs2_intrinsics color_intrinsic = colorFrame.get_profile().as<rs2::video_stream_profile>().get_intrinsics();
    rs2_extrinsics depth_to_color = depthFrame.get_profile().get_extrinsics_to(colorFrame.get_profile());

    const uint16_t* depthImage = (const uint16_t*)depthFrame.get_data();
    const unsigned char* colorImage = (const unsigned char*)colorFrame.get_data();

    for (int dy = 0; dy < depth_intrinsic.height; dy++) {
        for (int dx = 0; dx < depth_intrinsic.width; dx++) {
            // get the depth value at (dx, dy)
            unsigned int i = dy * depth_intrinsic.width + dx;
            uint16_t depth_value = depthImage[i];

            // if the depth value is 0, ignore this point
            if (depth_value == 0) {
                continue;
            }
            float depth_in_meters = depth_value * depthScale;

            // map the pixel coordinate in the depth image to a pixel coordinate in the color image
            float depth_pixel[2] = {(float)dx - 0.5f, (float)dy - 0.5f};
            float depth_point[3], color_point[3], color_pixel[2];
            rs2_deproject_pixel_to_point(depth_point, &depth_intrinsic, depth_pixel, depth_in_meters);

            // if it's out of the acceptable range, skip it
            if (depth_point[2] > maxAcceptDistance) {
                continue;
            }
            rs2_transform_point_to_point(color_point, &depth_to_color, depth_point);
            rs2_project_point_to_pixel(color_pixel, &color_intrinsic, color_point);

            // round to the closest color pixel and use it as the point's color
            const int cx = static_cast<int>(color_pixel[0] + 0.5f);
            const int cy = static_cast<int>(color_pixel[1] + 0.5f);

            // if it falls outside the color image, skip the point
            if (cx < 0 || cy < 0 || cx >= color_intrinsic.width || cy >= color_intrinsic.height) {
                continue;
            }
            const unsigned char* offset = colorImage + (cy * color_intrinsic.width + cx) * 3;

            float x = -1 * depth_point[0];
            float y = -1 * depth_point[1];
            float z = depth_point[2];
            unsigned char r = *(offset);
            unsigned char g = *(offset + 1);
            unsigned char b = *(offset + 2);
        }
    }
}

I have left out only the redundant parts. Basically, in the end, (r, g, b) all come back as 0 (the points appear black). Any suggestions?

PS. (Code editor is weird).

5 Replies
MartyG
Honored Contributor III

Have you seen the SDK 2.0 example for creating a point cloud?

https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud
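The heart of that example is handing the depth frame to rs2::pointcloud and texture-mapping the result against the color frame. Roughly like this (a from-memory sketch rather than the exact sample code, assuming a default depth + color pipeline):

#include <librealsense2/rs.hpp>

int main() {
    rs2::pipeline pipe;
    pipe.start(); // default config: depth + color streams

    rs2::pointcloud pc;
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::video_frame color = frames.get_color_frame();
    rs2::depth_frame depth = frames.get_depth_frame();

    pc.map_to(color);                          // choose the frame to texture against
    rs2::points points = pc.calculate(depth);  // vertices + per-vertex texture coordinates

    // points.get_vertices() and points.get_texture_coordinates() can then be
    // walked together to pair each (x, y, z) vertex with a (u, v) lookup into
    // the color image.
    return 0;
}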

idata
Employee

Hello MartyG,

I did see the example, but I couldn't figure out how to use the (u, v) values to obtain the (r, g, b) values from the color frame. Any help with that? Thanks.

MartyG
Honored Contributor III

I'll link RealSense stream programming expert jb455 into this discussion to seek his wide knowledge on the subject.

idata
Employee

So I know that once you do:

rs2::pointcloud pc;
rs2::points points;

rs2::frameset frames = pipe.wait_for_frames();
rs2::depth_frame depth = frames.get_depth_frame();
rs2::video_frame color_frame = frames.get_color_frame();

points = pc.calculate(depth);
pc.map_to(color_frame);

Then, you can get the texture coordinates that correspond to each vertex:

const rs2::vertex* ptr = points.get_vertices();
const rs2::texture_coordinate* tex_coords = points.get_texture_coordinates();
const unsigned char* colorStream = static_cast<const unsigned char*>(color_frame.get_data());

Each texture coordinate is a (u, v) pair in the range [0, 1] that can be mapped onto the color frame. I did this:

int x = tex_coords[i].u * color_intrinsic.width;
int y = tex_coords[i].v * color_intrinsic.height;
int colorLocation = y * color_intrinsic.width + x;
unsigned char color[3] = {colorStream[colorLocation], colorStream[colorLocation + 1], colorStream[colorLocation + 2]};

However, everything comes back as a single color. I stepped through and it seems both u and v are always 0.

idata
Employee
(Accepted Solution)

For anyone who reads this later: I had to swap the map_to and calculate calls. You also need to grab and discard a few frames first. Do it as follows:

rs2::pointcloud pc;
rs2::points points;

// grab and discard several frames first
for (int i = 0; i < 10; i++) pipe.wait_for_frames();

rs2::frameset frames = pipe.wait_for_frames();
rs2::depth_frame depth = frames.get_depth_frame();
rs2::video_frame color_frame = frames.get_color_frame();

pc.map_to(color_frame);
points = pc.calculate(depth);

int width = color_frame.get_width();
int height = color_frame.get_height();

const rs2::vertex* vertices = points.get_vertices();
const rs2::texture_coordinate* tex_coords = points.get_texture_coordinates();
const unsigned char* colorStream = static_cast<const unsigned char*>(color_frame.get_data());

for (size_t i = 0; i < points.size(); i++) {
    int x = static_cast<int>(tex_coords[i].u * width);
    int y = static_cast<int>(tex_coords[i].v * height);

    if (y < 0 || y >= height) {
        continue;
    }
    if (x < 0 || x >= width) {
        continue;
    }
    // rule out points at zero depth or further away than maxAcceptDistance
    if (vertices[i].z <= 0 || vertices[i].z > maxAcceptDistance) {
        continue;
    }

    int colorLocation = (y * width + x) * 3;
    unsigned char color[3] = {colorStream[colorLocation], colorStream[colorLocation + 1], colorStream[colorLocation + 2]}; // RGB

    float coordinates[3] = { vertices[i].x, vertices[i].y, vertices[i].z };
}
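If it helps anyone adapting this, the loop can also be wrapped into a reusable function that returns a complete (x, y, z, r, g, b) cloud. Treat this as a sketch: the ColoredPoint struct and the makeColoredCloud function are my own names, not SDK types.

#include <vector>
#include <librealsense2/rs.hpp>

// my own helper type: one colored point per valid vertex
struct ColoredPoint { float x, y, z; unsigned char r, g, b; };

// build an (x, y, z, r, g, b) cloud from an rs2::points object that was
// already texture-mapped (via pc.map_to) to the given RGB8 color frame
std::vector<ColoredPoint> makeColoredCloud(const rs2::points& points,
                                           const rs2::video_frame& color_frame,
                                           float maxAcceptDistance) {
    const int width = color_frame.get_width();
    const int height = color_frame.get_height();
    const rs2::vertex* vertices = points.get_vertices();
    const rs2::texture_coordinate* tex_coords = points.get_texture_coordinates();
    const unsigned char* colorStream = static_cast<const unsigned char*>(color_frame.get_data());

    std::vector<ColoredPoint> cloud;
    cloud.reserve(points.size());
    for (size_t i = 0; i < points.size(); i++) {
        int x = static_cast<int>(tex_coords[i].u * width);
        int y = static_cast<int>(tex_coords[i].v * height);
        if (x < 0 || x >= width || y < 0 || y >= height) continue;
        if (vertices[i].z <= 0 || vertices[i].z > maxAcceptDistance) continue;

        int colorLocation = (y * width + x) * 3; // RGB8: 3 bytes per pixel
        cloud.push_back({ vertices[i].x, vertices[i].y, vertices[i].z,
                          colorStream[colorLocation],
                          colorStream[colorLocation + 1],
                          colorStream[colorLocation + 2] });
    }
    return cloud;
}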
