I am trying to do computer vision with the R200 camera attached to the Aero board. I am using Python and OpenCV and currently I am able to retrieve frames using the RTSP stream:
rgbCap = cv2.VideoCapture('rtsp://192.168.8.1:8554/video13?width=640&height=480')
depthCap = cv2.VideoCapture('rtsp://192.168.8.1:8554/rsdepth?width=640&height=480')
This works, but the depth frames come through as blue-coloured images with values ranging from 0 to 255. I need to convert the depth into points [x, y, z].
Where do I find the camera intrinsic parameters on the R200 to go about doing this?
Also, if possible, how would I align the depth image to match the RGB image?
Thanks for reaching out!
I understand that you are using OpenCV and Python; however, I'd like to suggest you take a look at librealsense ( https://github.com/IntelRealSense/librealsense). This library gives you more direct control over the camera and the data it captures. If you are interested in this, I'd suggest you start by checking the following examples:
I hope this information helps you,
Thanks for the reply. Unfortunately I am not using C++, as most of my image processing code is in Python, and I am also using dronekit. I installed pyrealsense on my Ubuntu laptop but I am unable to get it to read the depth image from the RTSP stream. I looked at the examples you linked, but they don't seem to show the camera intrinsic parameters. Is there a way to access them on the Intel board? I would also like to know the scaling applied to the depth image so I can convert it back to proper disparity values.
I understand. First of all, I would like to mention that if your questions relate directly to using the camera with the dronekit library, you should contact its developers for support ( http://dronekit.io/support), as they should be able to provide you with more accurate answers. The same applies to pyrealsense: it is not an official wrapper for librealsense, so for questions about it our best suggestion is to contact the developers of that library ( https://github.com/toinsson/pyrealsense/issues).
Nevertheless, if you'd like to understand how the camera works, perhaps the documentation of librealsense ( https://github.com/IntelRealSense/librealsense/tree/master/doc) will be of help to you. This section in particular, https://github.com/IntelRealSense/librealsense/blob/master/doc/projection.md#intrinsic-camera-parameters, explains how the camera's intrinsic parameters are handled.
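Once you have the intrinsics, back-projecting a depth pixel to a 3D point is just the pinhole model that projection.md describes. Here is a minimal Python sketch; the intrinsic values (fx, fy, ppx, ppy) below are placeholders for illustration only, not your camera's real calibration, and it assumes depth is already in meters with no lens distortion:

```python
import numpy as np

def deproject_pixel_to_point(depth_m, u, v, fx, fy, ppx, ppy):
    """Back-project pixel (u, v) with depth in meters to a 3D point
    [x, y, z] in the camera frame (pinhole model, no distortion)."""
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return np.array([x, y, depth_m])

# Placeholder intrinsics -- read the real ones from your device.
fx, fy, ppx, ppy = 610.0, 610.0, 320.0, 240.0
point = deproject_pixel_to_point(1.5, 400, 300, fx, fy, ppx, ppy)
```

This mirrors what librealsense does internally for the undistorted case; the library's own deprojection helper additionally handles the distortion models described in the same document.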
I believe the rest of https://github.com/IntelRealSense/librealsense/blob/master/doc/projection.md, which covers depth images in more detail, may also be of help.
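Regarding aligning depth to RGB: the chain described in that document is deproject (depth intrinsics) → transform (depth-to-color extrinsics) → project (color intrinsics). A hedged sketch of that chain for a single pixel, where all the intrinsic values and the extrinsic rotation R and translation t below are made-up placeholders rather than real R200 calibration:

```python
import numpy as np

def align_depth_pixel_to_color(u, v, depth_m, depth_intr, color_intr, R, t):
    """Map a depth pixel to the corresponding color pixel.
    depth_intr / color_intr are (fx, fy, ppx, ppy) tuples; R (3x3)
    and t (3,) are the depth-to-color extrinsics."""
    fx, fy, ppx, ppy = depth_intr
    # Deproject into the depth camera's 3D frame.
    p_depth = np.array([(u - ppx) / fx * depth_m,
                        (v - ppy) / fy * depth_m,
                        depth_m])
    # Transform into the color camera's 3D frame.
    p_color = R @ p_depth + t
    # Project onto the color image plane.
    cfx, cfy, cppx, cppy = color_intr
    uc = p_color[0] / p_color[2] * cfx + cppx
    vc = p_color[1] / p_color[2] * cfy + cppy
    return uc, vc

# Placeholder calibration values for illustration only.
intr = (610.0, 610.0, 320.0, 240.0)
R = np.eye(3)
t = np.array([0.058, 0.0, 0.0])  # hypothetical baseline in meters
u_c, v_c = align_depth_pixel_to_color(320, 240, 1.0, intr, intr, R, t)
```

Doing this per pixel over a whole frame is what librealsense's alignment does for you, which is one more reason to use the library directly rather than reimplementing it over the RTSP stream.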