
Accuracy in real world x,y,z coordinates

EAnde8
Beginner

I'm trying to find the "real world" coordinates (cm or mm) of certain well-known objects/pixels in an image. I create point clouds and try to get the coordinates using 'points.get_vertices()'.

It seems that the depth/z coordinate works perfectly, but not the x and y... If, for example, I mark two pixels that I know are 50 mm apart, they usually appear to be 80 mm apart. Does anyone know why this might be? The point clouds themselves look good.

I use Python 2.7 on Windows 10, if that helps.
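
For reference, a minimal sketch of this point-cloud approach (the pipeline setup and pixel pair below are illustrative, not taken from the original code):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Generate the point cloud from the depth frame
pc = rs.pointcloud()
points = pc.calculate(depth_frame)

# get_vertices() returns one (x, y, z) vertex per depth pixel,
# in metres, in row-major order
vtx = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)

# Look up the 3D points behind two pixels and measure their separation
w = depth_frame.get_width()
x1, y1, x2, y2 = 100, 100, 150, 100
p1 = vtx[y1 * w + x1]
p2 = vtx[y2 * w + x2]
print(np.linalg.norm(p1 - p2))  # Euclidean distance in metres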

MartyG
Honored Contributor III

Back in March there was a case where someone measured the distance between two X-Y points that should have been 5 cm but read as 7 cm. An Intel support representative suggested using the 'Measure' sample program that comes with the RealSense SDK, and the user reported more accurate results with Measure.

https://github.com/IntelRealSense/librealsense/issues/1413#issuecomment-375265310

https://github.com/IntelRealSense/librealsense/tree/master/examples/measure

EAnde8
Beginner

Thanks, but using their method of calculating the 3D points didn't seem to help; I still get the same results (which I guess I'm supposed to get, if I did things correctly before with my point cloud method?).

I can't run their C++ code right now, but I translated the calculation into my Python script.

MartyG
Honored Contributor III

Another point of reference besides the Measure sample is the Python example for measuring boxes, which also supports the use of multiple cameras.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/box_dimensioner_multicam/box_dimensioner_multicam_demo.py

EAnde8
Beginner

Could it simply be a matter of bad calibration? The difference in distance depends a bit on the angle the camera views the scene from.

I previously thought that the factory calibration was bad, so I recalibrated the RealSense using the official tool, but that didn't seem to help.

MartyG
Honored Contributor III

RealSense cameras are usually well-calibrated out of the box. Recalibration is typically only needed if the camera receives a hard knock or is dropped onto the floor.

We do not get many questions about XY accuracy; accuracy issues are usually related to Z-depth. You say that you have excellent depth accuracy in your readings, though.

There is another recent discussion about measurement in Python here:

https://github.com/IntelRealSense/librealsense/issues/2343#issuecomment-418112374

Within that discussion, a Python tutorial called 'Distance to Object' is highlighted.

https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/distance_to_object.ipynb librealsense/distance_to_object.ipynb at jupyter · IntelRealSense/librealsense · GitHub

EAnde8
Beginner

I seem to get much better results (perfect so far) when using the intrinsic parameters from the color frame instead of the depth frame, and otherwise following the example at https://github.com/IntelRealSense/librealsense/tree/master/examples/measure.

For anyone else with the same problem, this is how I made a very simple distance function (assuming 'import numpy as np' and 'import pyrealsense2 as rs'):

def get_3d_coords(color_intr, depth_frame, xpix1, ypix1, xpix2, ypix2):
    # Depth (in metres) at each of the two pixels
    dist1 = depth_frame.get_distance(xpix1, ypix1)
    dist2 = depth_frame.get_distance(xpix2, ypix2)
    # Deproject each pixel to a 3D point, using the color intrinsics
    depth_point1 = rs.rs2_deproject_pixel_to_point(color_intr, [xpix1, ypix1], dist1)
    depth_point2 = rs.rs2_deproject_pixel_to_point(color_intr, [xpix2, ypix2], dist2)
    # Euclidean distance between the two 3D points
    return np.sqrt(np.power(depth_point1[0] - depth_point2[0], 2) +
                   np.power(depth_point1[1] - depth_point2[1], 2) +
                   np.power(depth_point1[2] - depth_point2[2], 2))

I don't yet know if it's something I just misunderstood the first time, or if it is connected to the fact that I align my depth and color images in a different part of my code. But this change of intrinsic parameters also fixed my problem with the distance changing with viewing angle!
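
This behaviour is consistent with the simple pinhole deprojection model, X = Z*(u - ppx)/fx and Y = Z*(v - ppy)/fy: intrinsics that don't match the (aligned) frame scale X and Y but leave Z untouched, which matches depth being accurate while X-Y distances were off. For completeness, a sketch of how the function above can be called with aligned frames (the pipeline setup and pixel coordinates are illustrative):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

# Align the depth frame to the color stream
align = rs.align(rs.stream.color)
frames = align.process(pipeline.wait_for_frames())
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

# Intrinsics of the color stream, which match the aligned depth frame
color_intr = color_frame.profile.as_video_stream_profile().get_intrinsics()

# Distance in metres between two hypothetical pixels
print(get_3d_coords(color_intr, depth_frame, 100, 100, 150, 100))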

Thanks for all the very fast answers!

MartyG
Honored Contributor III

Awesome news that you found a solution - thanks so much for sharing your technique and code!

jb455
Valued Contributor II

This was a bug which was supposed to have been fixed in 2.16; what version are you using?

https://github.com/IntelRealSense/librealsense/issues/2002
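
For anyone checking: newer pyrealsense2 builds expose the SDK version string from Python (older builds may not have this attribute):

import pyrealsense2 as rs
# Prints e.g. '2.16.x'; raises AttributeError on builds that lack it
print(rs.__version__)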

EAnde8
Beginner

As far as I can see I have version 2.12, so that would be before the bug fix then.
