Beginner

Converting the Depth Image to Point Cloud

Hello,

I am trying to do computer vision with the R200 camera attached to the Aero board. I am using Python and OpenCV, and I am currently able to retrieve frames from the RTSP streams:

import cv2

# RGB stream served by the Aero's RTSP server
rgbCap = cv2.VideoCapture('rtsp://192.168.8.1:8554/video13?width=640&height=480')

# depth stream from the R200
depthCap = cv2.VideoCapture('rtsp://192.168.8.1:8554/rsdepth?width=640&height=480')

This works, but the depth frames arrive as blue-coloured images with values ranging from 0 to 255. I need to convert the depth into points [x, y, z].
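To make it concrete, my understanding is that the conversion itself is just the standard pinhole back-projection, something like the sketch below, where fx, fy, cx, cy and depth_scale are placeholders for whatever values the R200 actually reports:

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, depth_scale):
    # depth: HxW array of raw depth values from the stream
    # depth_scale: factor converting raw units to metres (placeholder, not confirmed)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) * depth_scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading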

Where do I find the camera intrinsic parameters on the R200 to go about doing this?

Also, if possible, how would I align the depth image to match the RGB image?

Thanks

Community Manager

Hello Solias,

Thanks for reaching out!
I understand that you are using OpenCV and Python; however, I'd like to suggest that you take a look at librealsense ( https://github.com/IntelRealSense/librealsense). This library gives you more accessible control over the camera and the data it captures. If you are interested in this, I'd suggest starting with the following examples:

https://github.com/IntelRealSense/librealsense/blob/master/examples/cpp-alignimages.cpp

https://github.com/IntelRealSense/librealsense/blob/master/examples/cpp-tutorial-1-depth.cpp
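Those two samples are C++, but the alignment they perform is conceptually just a rigid transform between the two sensors followed by a re-projection. As a rough sketch in plain numpy (the colour intrinsics fx_c, fy_c, ppx_c, ppy_c and the depth-to-colour rotation R and translation t are assumptions here; in practice they have to be read from the device, for example through librealsense):

import numpy as np

def depth_point_to_color_pixel(point_xyz, R, t, fx_c, fy_c, ppx_c, ppy_c):
    # point_xyz: 3D point in the depth camera's coordinate frame, in metres
    # R (3x3) and t (3,): depth-to-colour extrinsics, assumed to be known
    p = R @ np.asarray(point_xyz, dtype=np.float64) + t  # move into the colour camera frame
    u = fx_c * p[0] / p[2] + ppx_c                       # project with the colour intrinsics
    v = fy_c * p[1] / p[2] + ppy_c
    return int(round(u)), int(round(v))                  # colour pixel this depth point lands on

Doing this for every deprojected depth pixel (and checking that the result falls inside the colour image) is roughly what the alignment example automates for you.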
I hope this information helps you,

 

Pedro M.
Beginner

Hi Pedro,

Thanks for the reply. Unfortunately I am not using C++, as most of my image processing code is in Python, and I am also using dronekit. I installed pyrealsense on my Ubuntu laptop, but I am unable to get it to read the depth image from the RTSP stream. I looked at the examples you linked, but they don't seem to show the camera intrinsic parameters. Is there a way to access them on the Intel board? I would also like to know what scaling is applied to the depth image, so I can convert it back to proper disparity values.

Community Manager

I understand. First of all, I would like to mention that if your questions are directly related to using the camera through the dronekit library, you should contact their developers for support ( http://dronekit.io/support), as they should be able to provide you with more accurate answers. The same applies to pyrealsense: it is not an official wrapper for librealsense, so for questions about using it our best suggestion is to contact the developers of that library ( https://github.com/toinsson/pyrealsense/issues).

Nevertheless, if you'd like to understand how the camera works, the librealsense documentation ( https://github.com/IntelRealSense/librealsense/tree/master/doc) may be of help. This section in particular explains how the camera's intrinsic parameters are handled: https://github.com/IntelRealSense/librealsense/blob/master/doc/projection.md#intrinsic-camera-parameters.

You will also find more about depth images in https://github.com/IntelRealSense/librealsense/blob/master/doc/projection.md, which may help as well.
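Regarding the scaling you asked about: the raw depth values are in device-specific depth units, and the projection documentation describes how a per-device depth scale converts them to metres; for rectified stereo, disparity and depth are then related through the baseline and the focal length. As a hedged sketch of those two conversions (depth_scale, fx and baseline_m below are placeholders that have to be read from the device, for example through librealsense):

import numpy as np

def raw_depth_to_metres(raw_depth, depth_scale):
    # depth_scale is device specific; librealsense reports it per device
    return raw_depth.astype(np.float32) * depth_scale

def depth_to_disparity(depth_m, fx, baseline_m):
    # rectified stereo relation: disparity (pixels) = fx * baseline / depth
    disparity = np.zeros_like(depth_m, dtype=np.float32)
    valid = depth_m > 0
    disparity[valid] = fx * baseline_m / depth_m[valid]
    return disparity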

Pedro M.