
filtering out invalid depth values?

drmatt
Beginner

Hello,

 

We are using a RealSense D415 for close-range sensing (on the order of 20-30 cm). With our current settings, the objects of interest appear fine, but the rest of the image is filled with nonsense values for the more distant background. Though these values are completely wrong (perhaps wrapped?), they are consistent over time.

 

[image attachment]

 

In this example, we used all the default camera settings, but set Advanced Controls -> Depth Table -> Disparity Shift to 170. The effect seems similar to running with the default settings and looking at objects too close to the camera.

 

Is there a way to filter out these bad depths? Or, is there a way to tell which pixels in the depth image are out of range, or correspond to wrapped disparity values, or are otherwise invalid?

 

Thanks in advance.

 

8 Replies
MartyG
Honored Contributor III

The Disparity Shift setting reduces the camera's minimum distance (MinZ) as it is increased, meaning the camera can get closer to objects before the image starts to break up as the object passes under the minimum distance. Increasing it also reduces the maximum distance (MaxZ) that the camera can see, so you see the background progressively breaking up and consuming more and more of the foreground detail as the maximum distance shrinks.

 

Also in the Depth Table section of the RealSense Viewer program, you can alternatively adjust the 'Depth Clamp Max' slider to reduce the maximum observable depth.
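If you prefer to set these values in code rather than the Viewer, a rough pyrealsense2 sketch might look like the following (assuming advanced mode is already enabled on the device; the field names follow the SDK's STDepthTableControl struct, and the clamp value is illustrative, not a recommendation):

```python
import pyrealsense2 as rs

ctx = rs.context()
dev = ctx.query_devices()[0]              # first connected RealSense device
advnc_mode = rs.rs400_advanced_mode(dev)  # assumes advanced mode is enabled

table = advnc_mode.get_depth_table()
table.disparityShift = 170                # same value as in your Viewer example
table.depthClampMax = 400                 # clamp MaxZ, in depth units (1 unit = 1 mm by default)
advnc_mode.set_depth_table(table)
```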

 

There was also a user who dealt with invalid pixels by giving them a value of '0' instead of removing them.

 

https://forums.intel.com/s/question/0D70P0000068dkcSAA
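As a rough illustration of that approach, a minimal pyrealsense2/NumPy sketch might look like this (the 15-35 cm working range here is just an assumed example):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# uint16 depth image in depth units (1 unit = 1 mm by default)
depth = np.asanyarray(depth_frame.get_data()).copy()

# Zero out everything outside an assumed 15-35 cm working range
MIN_MM, MAX_MM = 150, 350
depth[(depth < MIN_MM) | (depth > MAX_MM)] = 0
```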

drmatt
Beginner

Hi @MartyG, and thanks for your reply. Regarding disparity shift - you're right, we set it to a higher value in order to see objects closer to the camera for our application. We'd like to set the invalid pixels to zero, as you suggest, but the challenge is identifying which ones are invalid. In the first attached example, the non-zero depth pixels in the upper third of the image have "valid" values, meaning they fall in the same depth range as the true object in the foreground, but those depth values are not correct: they make the background objects appear to be in the foreground, and when rendered in 3D they look like random blobs hanging in the air near the actual object. Do you have any suggestions for distinguishing 'good' from 'bad' disparity/depth readings?

 

MartyG
Honored Contributor III

I would suggest trying a post-processing filter to adjust the pixels, such as a Temporal filter (which can make correction decisions on missing or invalid pixel data).

 

https://github.com/IntelRealSense/librealsense/blob/master/doc/post-processing-filters.md#temporal-filter
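If you later want to apply the same filter in code rather than the Viewer, a minimal pyrealsense2 sketch might look like this (the persistency index of 3 is illustrative; 0 disables persistency):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

temporal = rs.temporal_filter()
# The temporal filter's persistency mode is exposed through the
# holes_fill option (index 3 here is just an example value)
temporal.set_option(rs.option.holes_fill, 3)

frames = pipeline.wait_for_frames()
filtered_depth = temporal.process(frames.get_depth_frame())
```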

 

In the RealSense Viewer, you can easily apply post-processing filters by expanding the 'Post-processing' option in the side panel. Enable (blue icon) the filters that you want to apply, turn off (red icon) the ones that you do not, and then set the icon beside 'Post-processing' itself to blue to apply the filter settings in the list.

 

[image attachment]

 

 

The images below show a scene first with the Temporal Filter turned off, and then with it turned on. You can see how some of the holes from the first image are filled in once the Temporal filter is active.

 

[image attachments: Temporal Filter off / on]

 

 

As the documentation describes, you can select a 'Persistency' setting from a drop-down menu in the Temporal Filter section of the Viewer controls to define how you want pixel correction to be handled.

 


The 'Hole-filling' post processing filter can also do pixel corrections, with settings such as having invalid pixels use the value of a neighboring pixel.
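A minimal pyrealsense2 sketch of applying that filter might look like this (mode 1 is an illustrative choice; the modes run 0-2):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

hole_filling = rs.hole_filling_filter()
# Mode 1 should fill an invalid pixel from a neighboring pixel
# ("farest from around" in the SDK's post-processing documentation)
hole_filling.set_option(rs.option.holes_fill, 1)

frames = pipeline.wait_for_frames()
filtered_depth = hole_filling.process(frames.get_depth_frame())
```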

drmatt
Beginner

Unfortunately, post-processing doesn't work here, because the bad values are temporally persistent and "look" like valid values. We're not aiming to fill holes; it's rather the opposite: we're trying to get rid of the bad pixel fragments.

 

To illustrate a little more clearly, here is a similar example with an attempt at showing the 3D views:

 

[image attachment]

 

[image attachment]

 

The blue values in the depth image are zero, and we can deal with those. It's the small- and medium-sized fragments of non-zero pixels in the upper third of the depth image that are problematic: they fall within the same depth range as the good pixels, but they produce non-existent 3D fragments floating just above the objects of interest.

MartyG
Honored Contributor III

I can understand the wish to avoid post-processing in situations where you want to get the truest possible representation of the image. That is also why some users cannot use a disparity shift to improve their image. So I can definitely relate to the issue that you face.

 

It occurred to me that a lot of the fragments are like floating islands that are not joined to the main body of the scan, and that made me wonder whether you could use an algorithm to remove 'unconnected' pixels that are not joined to the main regions. My research found a couple of non-RealSense examples where other people had this fragmentation problem and cleaned the image up by removing small 'blobs' with MATLAB's bwareaopen function (a Python equivalent is sketched at the end of this post).

 

https://uk.mathworks.com/matlabcentral/answers/113073-how-to-remove-unconnected-pixels-or-objects-from-an-image

 

https://www.researchgate.net/post/Is_it_possible_to_filter_out_not_connected_pixels_with_a_regular_pattern_of_similar_width_area_in_binary_image_MATLAB2

 

If you are using Windows, you can connect the RealSense SDK to MATLAB with the SDK's MATLAB wrapper interface.

 

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/matlab
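If MATLAB is not convenient, a rough Python/OpenCV equivalent of that bwareaopen clean-up might look like this (the min_area value is just an assumed starting point you would tune for your scene):

```python
import cv2
import numpy as np

def remove_small_blobs(depth, min_area=500):
    """Zero out connected components of valid (non-zero) depth that are
    smaller than min_area pixels - a rough equivalent of bwareaopen."""
    mask = (depth > 0).astype(np.uint8)
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = depth.copy()
    for label in range(1, num_labels):        # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] < min_area:
            cleaned[labels == label] = 0
    return cleaned
```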

drmatt
Beginner

@MartyG Once again, thanks for the suggestion. There may be some things we can do via custom post-processing to help. Removing small connected components, as you suggested, is one possibility, though in many cases we'd end up removing a lot of the object of interest too; and some of the false islands are actually quite large.

 

We were hoping that there were driver settings or other information (like confidence maps?) that could be pulled from the camera itself, so that such post-processing wouldn't be necessary. The artifacts seem to occur on background objects whose depths are outside the disparity range we've set, yet they get assigned depths in the valid range. This could be due to some kind of wrapping or aliasing of the projector pattern, or perhaps some other cause.

 

The images as they are now are, unfortunately, unusable for our application, so it sounds like we need to rethink our sensor setup.

 

In any case, I appreciate your continued help and responsiveness!

MartyG
Honored Contributor III
(Accepted solution)

Your mention of confidence maps reminded me that I recently answered a question where a user asked about a confidence map. In that discussion, Dorodnic the RealSense SDK Manager said that the SR300 camera model (which uses Structured Light camera technology) supports confidence maps, but the 400 Series (which is Stereo technology) does not, because "stereo is relying on matching between pixels. There either is a match or there isn't, the best pixel is always matched with '100% confidence'."

 

He added, "The only related parameter for D400 I can think of is the Second Peak Threshold under Advanced Mode. This parameter will disqualify depth pixel, if the "second best" match is too close to the best match. Practically, however, this does not function exactly like a confidence threshold".

 

The full discussion can be found at the link below.

 

https://github.com/IntelRealSense/librealsense/issues/3185
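For reference, a rough pyrealsense2 sketch of changing that parameter might look like the following (this assumes the deepSeaSecondPeakThreshold field name from the SDK's STDepthControlGroup struct, and the 600 value is purely illustrative):

```python
import pyrealsense2 as rs

ctx = rs.context()
dev = ctx.query_devices()[0]
advnc_mode = rs.rs400_advanced_mode(dev)   # assumes advanced mode is enabled

ctrl = advnc_mode.get_depth_control()
# Raising this should reject depth pixels whose second-best disparity
# match scores too close to the best match (600 is not a recommendation)
ctrl.deepSeaSecondPeakThreshold = 600
advnc_mode.set_depth_control(ctrl)
```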

drmatt
Beginner

We tried your suggestion, and unfortunately changing the Second Peak Threshold does not really help. There are other parameters that remove the artifacts when changed, but they also remove much of the true objects. So it seems there is no simple way to distinguish between true and false disparities.

Thanks once again.
