
Issues with custom disparity generation

PLore1
Beginner

Does anyone have experience generating custom disparity maps with the D400-series sensors? I am using a D415 at the moment, and while the 3D results are OK, they do seem quite wavy (see "Very wavy cloud", https://github.com/IntelRealSense/librealsense/issues/1375). In my quest to get better results, I have been experimenting with OpenCV's StereoBM and StereoSGBM to generate a disparity map from the left/right IR images recorded from the D415 at 1280x720 resolution. However, when I convert the disparity values into a depth map, I am getting what looks to be exponential scaling in the Z direction. See the images below: the first 3D mesh is generated from the depth map provided by the sensor, while the second is generated from the StereoBM disparity map. The code I use to convert from the depth map to 3D points is identical in both cases.

For reference, here's what the disparity map looks like:

Here are the relevant values I am using to convert from disparities to depth, retrieved from the IR intrinsics/extrinsics:

cX = 629.8552856445313

cY = 358.5696105957031

fX = 938.2886352539063

fY = 938.2886352539063

baseline = 0.054923441261053085

I am converting from disparity to depth using the following formula:

depth = (fX * baseline) / disparity

Note that the disparity value is also divided by 16 due to the data format returned by StereoBM.
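Concretely, the conversion (including the divide-by-16 for StereoBM's fixed-point output) can be sketched like this using the values above; the sample disparities are made up for illustration:

```python
import numpy as np

fX = 938.2886352539063
baseline = 0.054923441261053085  # metres

# StereoBM output is 16-bit fixed point: disparity in units of 1/16 pixel.
raw = np.array([[64 * 16, 32 * 16, 0]], dtype=np.int16)

disp = raw.astype(np.float32) / 16.0
# depth = fX * baseline / disparity; zero/negative disparities are invalid.
depth = np.where(disp > 0, fX * baseline / np.maximum(disp, 1e-6), 0.0)
print(depth)  # ≈ [[0.805, 1.610, 0.0]] metres
```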

I have also tried creating a Q matrix based on the values above so that I can use cv::reprojectImageTo3D() - results were (unfortunately) the same.

From experimenting with these values, I suspect the issue is with generating the disparity map, and not the conversion afterwards. It is my understanding that the IR images are already rectified and lens distortion has been removed (see "Projection in RealSense SDK 2.0", https://github.com/IntelRealSense/librealsense/wiki/Projection-in-RealSense-SDK-2.0#d400-series). Is there some other secret sauce going on here that I'm missing? Maybe lens distortion that's not accounted for?

I realize that I can do a custom calibration myself, but I'd really like to avoid that step if possible. Any help would be greatly appreciated.

PLore1
Beginner

Do you mind me asking what settings you used for the disparity generation? Were you using OpenCV/StereoBM, or custom code?

Lorp

ROhle1
Novice

It is custom software, developed on and off again for a LONG time :) My area of interest is quantitative analysis of optic nerve structures.

When Sony comes up with the next-generation A7s I think I might be there... right on the cusp.

The algorithm doesn't use neighborhood statistics, but it does have problems with edges; so I'm thinking that running my images through some of Intel's routines might make my life easier.

Of course, what would make my life perfect would be for Intel or one of their trusted partners to build a stereo fundus camera around the D400 series :)

Regards

Rich

ps... I heard back from Click... Apparently I hit the wrong Support button:)

PLore1
Beginner

I tested several pixel offsets based on your comment - the second image needed to be shifted to the left by 73 pixels. Once this is done, OpenCV's disparity generation features can be used to get correct results. The results were good, but not necessarily better than the on-board processing results. I'll be doing some more investigation into the on-board processing sometime over the next week or so.

ROhle1
Novice

I've never used OpenCV, but it appears that many of the features that people want are going to depend on it.

I would expect it to be very slow compared to the VPU... true?

Next time you are working with the camera, try turning the projector off and then compare the results with OpenCV's output.

One of the docs mentions that in some circumstances an additional projector can be used to improve results.

PLore1
Beginner

Just as a quick update, I have observed that the 73 pixel shift observed in my one example doesn't seem to be consistent. The offset varies - the last scan I did had a 1 pixel offset instead. So ultimately, while the OpenCV disparity approach can work, the pixel offset needs to be determined dynamically before generating the disparity map.

Lorp

ROhle1
Novice

That is just the oddest thing. When I looked at the right IR image that you posted, I would have sworn that you had been playing with it and just posted the wrong version.

If the camera is doing this, I can imagine several reasons why, but I would probably be wrong about all of them.

If there is actually a reason why the camera would do this... other than a firmware bug... then it would be pretty easy to compute the translation and correct it.

Just take a vertical line from the zero column of the right image... and then look for the last exact match as you compare to the right.
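A brute-force version of that idea can be sketched in NumPy (the function name and the simulated 73 px shift are illustrative; this variant correlates the middle row rather than a single column, which amounts to the same thing for a purely horizontal offset):

```python
import numpy as np

def estimate_x_offset(left, right, max_shift=100):
    """Search for the horizontal offset of `right` relative to `left`
    by normalized cross-correlation of the middle image row."""
    row_l = left[left.shape[0] // 2].astype(np.float64)
    row_r = right[right.shape[0] // 2].astype(np.float64)
    best, best_score = 0, -np.inf
    for s in range(max_shift + 1):
        a = row_l[: row_l.size - s] - row_l[: row_l.size - s].mean()
        b = row_r[s:] - row_r[s:].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        score = (a @ b) / denom if denom else -np.inf
        if score > best_score:
            best, best_score = s, score
    return best

np.random.seed(0)
left = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
right = np.roll(left, 73, axis=1)  # simulate the right frame offset by 73 px
print(estimate_x_offset(left, right))  # 73
```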

Regards,

Rich

PSnip
New Contributor II

Hi Anders and Jesus,

Section-3.c "Recalibrate" of the white paper you shared earlier on Depth tuning mentions that, "There is a dynamic calibration app that can be used, or for best results we recommend getting the Intel OEM Calibration Tool and Chart."

Can you please share more information/documentation on this? Any link to a white paper or application note on recalibration of the D4xx?

Regards,

PS

Anders_G_Intel
Employee

Please see if the links at the bottom of this page help:

https://realsense.intel.com/intel-realsense-downloads/ (Intel® RealSense™ Downloads, Documents and Tools)
