I want to do image processing on depth data received from the Intel RealSense R200 on the Intel Aero rtf drone.
One option is streaming the depth data. I already followed this guide: https://github.com/intel-aero/meta-intel-aero/wiki/WiFi-Streaming. Streaming the RGB data from the RealSense works fine, both with GStreamer on Windows and with QGroundControl on Linux. The guide also lists commands to stream the depth data with GStreamer on Windows, but they don't seem to produce any output (no window opens like with the RGB stream). I also tried the commands found here: https://github.com/intel-aero/meta-intel-aero/wiki/RealSense in an Ubuntu environment, but they don't seem to produce any output either. Is there another way to stream depth data, or a more in-depth explanation of how to make these commands work?
Another option for me is to capture the depth data to files and do the image processing afterwards. How can I save a certain number of frames with depth info to a file on the drone?
Thank you in advance.
This code may help you get the depth frame and store it in an OpenCV Mat array (it works for me):
auto depthFrame = dev.get_frame_data(rs::stream::depth);
// the R200 depth stream is 16-bit (z16), so CV_16UC1 matches the data layout
Mat frameDepth(Size(640, 480), CV_16UC1, (void*) depthFrame, Mat::AUTO_STEP);
You can find more useful examples here: https://github.com/IntelRealSense/librealsense/tree/master/examples.
I guess the one you need is cpp-tutorial-3-pointcloud.cpp.
Thank you for your help, that code does seem very helpful in my case!
I don't have any experience with image processing, so may I ask why you chose to store the stream in an OpenCV Mat array? I don't see it used in any of the examples; is it practical for image processing, or just personal preference?
How did you get your C++ code compiled on the drone? If I am not mistaken the libraries for the Intel Realsense camera (librealsense) are not yet on the Aero Compute board.
1- I need a Mat object (which is a matrix) to do some processing on the frames. It is one of the main types OpenCV uses for image processing.
Check this article, it will help at the beginning:
https://software.intel.com/en-us/articles/using-librealsense-and-opencv-to-stream-rgb-and-depth-data...
2- At the beginning, I suffered a lot until I was able to run the code on the drone. First, I developed the code on my personal computer and tested it with an SR300 (another RealSense camera), using the Eclipse IDE (as explained in the previous link). After that you can compile the project and transfer the executable to the Aero board over SSH or via USB. You must have the same library versions on the Aero board or the executable will not work. Most of the time I was not able to run the executable, so I sent the whole project to the Aero board and compiled it there; in that case you have to adjust the makefile and the other .mk files, because the libraries on the Aero may not be in the same paths as on your computer.
3- You can download librealsense from this link: https://github.com/IntelRealSense/librealsense and transfer it to the Aero, or you can download it directly on the Aero after disabling AP mode as explained in this link: https://github.com/intel-aero/meta-intel-aero/wiki/08-Aero-Network-and-System-Administration#networking-internet-access
After that just follow the instructions here: https://github.com/IntelRealSense/librealsense/blob/master/doc/installation.md.
I hope that what I wrote is helpful.
P.S.: Everything I wrote is just from personal experience, so there may be other ways to do it. The information I provided might not be 100% correct; it is better to double-check.
Your answer is certainly helpful! I am trying it out, but I am currently stuck building the example code from the first link on my own Linux PC. Eclipse gives me errors on almost all objects; it seems the libraries aren't being recognized. I would also have to add extra libraries like cstdlib, but to my knowledge they should already be included. Is this a problem you have encountered before?
Because I am working with a deadline and retrieving data from the RealSense camera is not fully working yet, I have tried to work with examples of depth data I found online. The only one I could find is an image of a wall (https://software.intel.com/en-us/articles/realsense-depth-data). This already gives me an idea of the format, but it isn't quite as challenging to interpret as real data would be.
Would you happen to know where I could find more data samples like this from the RealSense?
Sorry to be late to respond.
I just relied on the few examples in the links I mentioned above. Honestly, it is quite difficult to work with the libraries and link them correctly so that the compiler works. You should look at the error messages and try to solve them one by one.
Please do not hesitate to ask if there is anything more I can help with.
So far I haven't gotten much further with compiling the file on the drone. I got output from the RealSense, but not in a format I can use (I have the same problem as mentioned here: https://communities.intel.com/thread/113940).
Your answers have helped me a lot, however! I will keep trying and post an update if I find out what the problem was.