Software Archive

What is the meaning of RealSense depth data? Distance from a pixel to the center of the camera, or distance to a depth plane?

Alex_X_
Beginner

I used to use the Kinect v2; its depth value means the distance to a vertical plane that the pixel belongs to.

Does RealSense work the same way?

4 Replies
kfind1
New Contributor I

From my testing so far, a raw depth value is the distance from the point in the scene to the pixel on the image sensor (or the equivalent sensing model, when using stereo vision). I don't think the raw values are perpendicular to the X-Y plane; that is only true after calibration. Until you calibrate, you have to deal with the camera perspective/projection.

On the R200 I then get the device's calibration info and use a StreamCalibration and the Projection class to convert this raw depth to a proper world-coordinate XYZ in floating-point millimeter values. This is still with reference to the camera frame; if you wanted a static world reference, you would need to find or make the transform between the camera and world frames, especially if the camera is moving.
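
For reference, the current librealsense2 API exposes the same deprojection idea directly (the R200-era StreamCalibration/Projection classes mentioned above belong to the older SDK and are not shown here). Below is a minimal sketch, assuming a librealsense2 install and a connected depth camera; note it returns meters rather than millimeters, and the resulting point is still in the camera frame:

    #include <librealsense2/rs.hpp>
    #include <librealsense2/rsutil.h>   // rs2_deproject_pixel_to_point
    #include <iostream>

    int main()
    {
        rs2::pipeline pipe;
        pipe.start();                                   // default config, depth stream enabled

        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();

        // Intrinsics describe the projection model (fx, fy, ppx, ppy, distortion).
        rs2_intrinsics intrin =
            depth.get_profile().as<rs2::video_stream_profile>().get_intrinsics();

        // Pick the center pixel as an example; get_distance() returns meters.
        float pixel[2] = { intrin.width / 2.0f, intrin.height / 2.0f };
        float z = depth.get_distance(static_cast<int>(pixel[0]), static_cast<int>(pixel[1]));

        // Deproject: pixel + depth -> 3D point (x, y, z) in the depth camera's frame.
        float point[3];
        rs2_deproject_pixel_to_point(point, &intrin, pixel, z);

        std::cout << "Camera-frame point: "
                  << point[0] << ", " << point[1] << ", " << point[2] << " m\n";
        return 0;
    }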

jb455
Valued Contributor II

You get the distance from the camera plane, i.e., the perpendicular distance, not a straight line from the object to the camera lens. (Is that what you meant?)

So if you had a completely flat object exactly perpendicular to the camera, the depth value should be constant across the face of the object (though in reality it's a bit wobbly due to errors).

samontab
Valued Contributor II

The depth returned is the Z component of the 3D point, with the origin at the depth camera's optical center.

In other words, it is the distance to the XY plane, not the distance to the center of the camera.
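
If what you actually want is the straight-line distance from the point to the optical center (the "distance to the center of the camera" in the question), you can recover it from the Z depth and the pixel location using the intrinsics. A minimal sketch; the intrinsic values fx, fy, cx, cy below are made up for illustration, not taken from a real device:

    #include <cmath>
    #include <iostream>

    // Convert a per-pixel Z depth (distance to the camera's XY plane) into the
    // straight-line distance from the 3D point to the optical center.
    // fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    float rayDistanceFromZ(float z, float u, float v,
                           float fx, float fy, float cx, float cy)
    {
        // Back-project the pixel: X = (u - cx) / fx * Z, Y = (v - cy) / fy * Z.
        float x = (u - cx) / fx * z;
        float y = (v - cy) / fy * z;
        return std::sqrt(x * x + y * y + z * z);   // Euclidean distance along the viewing ray
    }

    int main()
    {
        // Illustrative intrinsics only (assumed values).
        float fx = 600.0f, fy = 600.0f, cx = 320.0f, cy = 240.0f;

        // At the principal point the two values coincide; towards the image
        // corner the ray distance exceeds the Z depth.
        std::cout << rayDistanceFromZ(1.0f, cx, cy, fx, fy, cx, cy) << "\n";     // 1.0
        std::cout << rayDistanceFromZ(1.0f, 0.0f, 0.0f, fx, fy, cx, cy) << "\n"; // ~1.2
        return 0;
    }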

Alex_X_
Beginner

Thank you all, it helps a lot. 
