
d435 depth camera

EAlta
Beginner
6,594 Views

How can I get the depth (in meters, for example) of each pixel of the depth image produced by the RealSense D435 depth camera and captured with the RealSense Viewer?

0 Kudos
26 Replies
MartyG
Honored Contributor III
567 Views

The link below suggests that you could load your ply file into the MeshLab software (sometimes used to convert RealSense 2.0's ply pointclouds to solid-mesh 3D models) and convert it to a depth map with the 'depthmap' shader.

https://stackoverflow.com/questions/42754843/ply-polygonal-mesh-to-2d-depth-map (".ply polygonal mesh to 2D depth map" on Stack Overflow)

0 Kudos
EAlta
Beginner
567 Views

MartyG jb455

That could be an option, but I would like to try with the SDK's functions. I already have the intrinsic values, but I don't really understand what the rs2_project_point_to_pixel(...) function (https://github.com/IntelRealSense/librealsense/blob/5e73f7bb906a3cbec8ae43e888f182cc56c18692/include/librealsense2/rsutil.h#L15) returns. The documentation says it converts a 3D point into a 2D image, but is it a depth image? And do I have to pass every point (x, y, z) independently?

0 Kudos
MartyG
Honored Contributor III
567 Views

JB will be better able to offer advice on this question, as RealSense stream programming is not one of my specialist areas, unfortunately. Good luck!

0 Kudos
jb455
Valued Contributor II
567 Views

That function projects a single 3D (x, y, z) point to a (u, v) pixel in the depth image. The `pixel` parameter you pass will be an empty two-element array which is filled in by the function (the Python wrapper simply returns it).

You'd probably want to start with a float array the size of `depth.width*depth.height` initialised with zeroes, then loop through your point cloud (xyz), apply the function on each point and then fill in the `(u + v*depth.width)`th element of your float array with the z value of the point. Then when you're finished the array will be filled in with all the depth values, and any zeroes that are left will represent points that had no depth data. You can then process that however you want. If you want an image you'll probably need to use OpenCV or something to visualise the depth data.
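Putting that together, here's a minimal Python sketch (assuming the pyrealsense2 bindings and that your point cloud is already available as (x, y, z) triples in metres; `pointcloud_to_depth_array` is just an illustrative name):

```python
import numpy as np
import pyrealsense2 as rs

def pointcloud_to_depth_array(points_xyz, depth_intrin):
    """Project (x, y, z) points back into a flat depth array of size width*height."""
    w, h = depth_intrin.width, depth_intrin.height
    depth = np.zeros(w * h, dtype=np.float32)  # zero = no depth data for that pixel

    for x, y, z in points_xyz:
        if z <= 0:
            continue  # skip invalid/empty points
        u, v = rs.rs2_project_point_to_pixel(depth_intrin, [x, y, z])
        u, v = int(round(u)), int(round(v))
        if 0 <= u < w and 0 <= v < h:
            depth[u + v * w] = z  # store the depth (z, in metres) at that pixel

    return depth.reshape(h, w)  # reshape to 2D if you want to visualise it, e.g. with OpenCV
```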

0 Kudos
EAlta
Beginner
567 Views

Thank you jb455

I'm going to apply the algorithm you propose.

What are the "u" and "v" values that the projection function returns?

I applied the SDK's "rs.rs2_project_point_to_pixel()" function with the following inputs:

- Depth intrinsic values: width: 424, height: 240, ppx: 214.063, ppy: 120.214, fx: 213.177, fy: 213.177, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]

- point to convert (from pointcloud): [-0.968, -0.582, 1.03]

The output from the function is: [13.718368530273438, -0.24103546142578125]
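For reference, here is roughly how I called it (a sketch with the pyrealsense2 bindings; the intrinsics object is rebuilt by hand here only to mirror the values quoted above, normally you would take it straight from the depth stream profile):

```python
import pyrealsense2 as rs

# Rebuild the depth intrinsics quoted above by hand
intrin = rs.intrinsics()
intrin.width, intrin.height = 424, 240
intrin.ppx, intrin.ppy = 214.063, 120.214
intrin.fx, intrin.fy = 213.177, 213.177
intrin.model = rs.distortion.brown_conrady
intrin.coeffs = [0, 0, 0, 0, 0]

point = [-0.968, -0.582, 1.03]  # (x, y, z) from the point cloud, in metres
pixel = rs.rs2_project_point_to_pixel(intrin, point)
print(pixel)  # roughly [13.72, -0.24]
```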

0 Kudos
jb455
Valued Contributor II
567 Views

`u` and `v` are the `x` and `y` coordinates in the depth image, which are `point[0]` and `point[1]` (the output values). You'll probably need to round the values and cast as `int` so you can get/set items in the array.
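In other words, something along these lines (a sketch; `depth_array` and `depth_intrin` stand for the float array and depth intrinsics from the earlier posts):

```python
u, v = rs.rs2_project_point_to_pixel(depth_intrin, point)
col, row = int(round(u)), int(round(v))

# Only keep pixels that actually land inside the image
if 0 <= col < depth_intrin.width and 0 <= row < depth_intrin.height:
    depth_array[col + row * depth_intrin.width] = point[2]  # z value in metres
```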

0 Kudos