Most of the SDK samples get and display the depth in the PIXEL_FORMAT_RGB32 pixel format. I've always used the PIXEL_FORMAT_DEPTH format since that seems more appropriate.
What exactly am I getting if I ask for the depth image in PIXEL_FORMAT_RGB32? In the samples it seems kind of like an inverted distance, with 1 right up close to the camera and 0 being some distance away. Is that what it is, or something else?
Thanks
Hi Malcolm,
From the documentation, you can see that PIXEL_FORMAT_DEPTH gives you 16-bit depth information and PIXEL_FORMAT_DEPTH_F32 gives you 32-bit depth information, both in mm. This is actually what you want, as you already know.
My assumption is that if you use PIXEL_FORMAT_RGB32 with the depth stream (which, as you noted, is less appropriate), you may get 32-bit raw depth information, but I haven't checked this directly. In my opinion it should give you an error such as "Invalid format". If it actually gives you the raw information, it should be similar to PIXEL_FORMAT_DEPTH_RAW, but in 32 bits instead of 16.
If that's the case, those numbers encode the distance of each object in a device-specific manner, similar to the disparity in a stereo system: objects that are far away will have lower disparity numbers, and closer objects will have higher ones. But this is not really that helpful in general. Just use PIXEL_FORMAT_DEPTH_F32 for high-resolution depth information.
Well, the reason I ask is that my end-user wants the depth as it shows up in the SDK samples, which is PIXEL_FORMAT_RGB32, so I'm looking to understand exactly what it is.
Hi Malcolm,
Try to use the following function: https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/querydepthunit_device_pxccapture.html
Regards,
Felipe
Hey Felipe,
I'm unsure how that will help me. The documentation says that's used for PIXEL_FORMAT_DEPTH_RAW. How does RGB32 relate to that format? Is it just a 32-bit version where I treat the entire 32 bits as one 32-bit unsigned int, instead of the 16-bit ints I get from PIXEL_FORMAT_DEPTH_RAW?
Can you elaborate?
Thanks
My guess would be that it is a 32-bit floating point version of PIXEL_FORMAT_DEPTH_RAW.
The actual numbers would be the displacement of the projected pattern in the scene. That's why you see larger numbers for objects that are closer, and smaller numbers for objects that are farther away. The units of those numbers can be retrieved with QueryDepthUnit.
So I just did a test, and it seems like the pixels have the values (X, X, X, 255), where X is this undocumented representation of depth. So it's not a 32-bit value; it's an 8-bit value repeated three times, followed by 255 for the padding/alpha channel.
So there's still no answer as to what these values represent.
I was hoping it was an 8-bit version of PIXEL_FORMAT_DEPTH_RAW, but that's not it either, as that has 0 = close to the camera and 1 = far from the camera.
Malcolm,
The RGB32 format is used to display the image only. The Depth format returns the raw data that the sensor is reading from the environment, which is more suitable if you are developing an app that does measurements or needs to check the depth for any reason. The function that I sent you is a way to get the measurement unit in which the values of each pixel are represented.
Regards,
Felipe
Hey Felipe,
Yes, I understand that. However, some users will want to use this 'display-only' RGB32 depth image, especially since that's all they ever see in the SDK samples (I'm adding support for it specifically based on a user request).
I want to document what it means for them: what does a pixel value of 1 mean, and what does a pixel value of 0 mean? It seems like it's doing a dynamic re-ranging of the pixel values based on the overall scene depth, and that's fine. I just want to be able to document it.
Meghana is also trying to answer this question for me over email, so you may want to sync up with Meghana to avoid double work in answering this.
Thanks!
Well, it looks to me like it actually may be the 8-bit version of PIXEL_FORMAT_DEPTH_RAW, just mapped from 0..1 into 0..255 and repeated into the other channels.
As I commented before, this number represents a measurement of the lateral shift of the projected pattern. The closer the object, the more shifted this pattern will be.
