I'm new to Linux, new to depth-cameras, saw my first Arduino 7 months ago - you get the picture.
My main focus is ROS, and naturally my preferred OS is Ubuntu. Ubuntu 16, with ROS Kinetic. Why don't people update? I'm not talking only about you guys, but all over the place: Igloo and Jade; why??
Long story short, I've wasted 30 hours of my life before moving over to Ubuntu 16 on the RTF. I expected the cameras to be inaccessible, but I cannot even seem to locate the FCU itself, so I cannot launch mavros and the whole device is a waste of space at the moment. I've tried the usual combinations: ttyS1, 1500000 baud, etc. I wondered whether I had removed the FCU altogether while removing Yocto, but connecting telemetry through some old 3DR 433 MHz radios gets the job done and shows there is life... extremely slowly and uselessly unreliably.
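For reference, this is roughly the kind of thing I was cycling through; the device path and baud rate here are just two of my guesses, not values I've confirmed work on the Aero:

```shell
# Hypothetical attempt: point mavros at what I hoped was the internal FCU UART.
# /dev/ttyS1 and the baud rate are guesses, not confirmed Aero values.
roslaunch mavros px4.launch fcu_url:=/dev/ttyS1:1500000

# Fallback sanity check over the slow 3DR 433 MHz telemetry link,
# with mavros listening for MAVLink over UDP instead of serial:
roslaunch mavros px4.launch fcu_url:=udp://:14550@
```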
I'm wondering if I could solder a direct connection from the telemetry port to the OTG port, but I don't know the pinout, and it would become a hassle to juggle between that and keyboard/mouse/peripherals.
Is there any straightforward way to just get control of my ESCs, IMUs, and GPS?
It's really a small ask, given the price I've paid for this thing, which currently seems able to do absolutely nothing. TBH it seems like someone has put real effort into making this look like an open development platform, while it's really open for development only in Intel's explicit commercial interest...?
Sorry for the tone. I'm exhausted, I don't like quitting before I figure something out, so know I'm really pissed.
Thanks for reaching out.
There is no need to apologize, if I were in your position, I would feel the same way.
The officially supported image for the RTF is based on the Yocto Project; as you can see in the official documentation, there is a lot of information about using it with Yocto: https://github.com/intel-aero/meta-intel-aero/wiki.
Personally, I don't have experience using the RTF with Ubuntu, and there isn't documentation for this, so the best recommendation I can give you is to use the Yocto Project image: all the RTF's updates are based on this OS, and you will be able to use all the available functionality.
If you check https://github.com/intel-aero/meta-intel-aero/wiki/01-About-Intel-Aero#oses-and-sw-development-methods, you will see a disclaimer saying that Ubuntu and other Linux distros may be installed on Intel Aero, but some functionality may or may not work.
I know that you are new to Linux, but I can assure you that you won't have issues using Yocto; and if you do, don't hesitate to contact us, we will be happy to help you.
Have a nice day.
Should I write a new inquiry for every follow-up, or..?
I realize that I might have to go along with the Yocto OS, or shelve this for now. The big "if" here is the camera. I've seen so many "yes, the camera is integrated" and "no, the camera is not yet supported" that I don't know what to believe. There is probably a difference between "officially supported" and "actually working"; my interest is obviously in the latter: can I use the R200 camera to perceive depth in an application run on board the drone?
I attempted to follow another user's GStreamer pipeline and got the same green screen. I also ran the demo a while ago (you type something and see different images on the hooked-up HDMI), so I'm kind of confused. Is the R200 ready for use (with Yocto) or is it not?
Is it possible to upgrade to, or at least make compatible, the Kinetic distribution of ROS rather than Indigo?
Before switching to Ubuntu I also attempted to use ROS, and I had problems assigning another master. Is this another obstacle that has to be overcome, or am I missing something? (I had hooked the drone up as a client rather than as an AP, if that matters.)
It seems like no matter what approach I take with this, I'm going to have to make some pretty big sacrifices compared to just hooking a Joule up to a depth camera and a Pixhawk. I mean, where is the "aha, awesome" part?
I will try to answer all your questions in this reply.
About the R200 camera: yes, it is ready to use, and you can definitely perceive depth in an application run on board the drone. You can use all the examples as a reference for your application: https://github.com/intel-aero/meta-intel-aero/wiki/RealSense. I'm also getting the same green screen; I think that is how it works, but let me ask around about this.
If you need more information about the RealSense R200 camera, note that it uses librealsense, and its repository has a lot of information: https://github.com/IntelRealSense/librealsense.
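As a quick way to check whether the camera streams at all before blaming the pipeline, you can probe it with standard V4L2 and GStreamer tools. This is only a sketch: the RealSense exposes several /dev/video* nodes and the exact node number for the RGB stream is an assumption here, so list the devices first and adjust:

```shell
# List the video device nodes the RealSense exposes (names/numbers vary):
v4l2-ctl --list-devices

# Hypothetical test pipeline for the RGB stream; device=, format, width,
# and height are assumptions -- adjust to match what v4l2-ctl reported:
gst-launch-1.0 v4l2src device=/dev/video13 ! \
    video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! \
    videoconvert ! autovideosink
```

If this pipeline shows a live image, the camera hardware and driver are fine and any remaining green-screen issue is in the downstream pipeline configuration.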
Regarding the Kinetic Kame distribution, it is not possible yet. At the moment the supported distribution is Indigo Igloo, but this is a good question, and I will pass it along to the proper team to consider for future releases.
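On your question about assigning a different ROS master: that is normally handled with environment variables on each machine, independently of whether the drone's Wi-Fi is in AP or client mode, as long as the two machines can reach each other's IPs. A minimal sketch, where the host names and IP addresses are placeholders for your own network:

```shell
# On the machine running the master (placeholder IP 192.168.8.100):
export ROS_MASTER_URI=http://192.168.8.100:11311
export ROS_IP=192.168.8.100
roscore

# On the drone (placeholder IP 192.168.8.1), point at the same master
# before launching any nodes:
export ROS_MASTER_URI=http://192.168.8.100:11311
export ROS_IP=192.168.8.1
```

Setting ROS_IP (or ROS_HOSTNAME) on both sides matters: without it, nodes advertise a hostname the other machine cannot resolve, which looks exactly like a "cannot assign another master" problem.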
The Intel Aero Ready To Fly ("RTF") kit is a great product if you give it a try: you can start coding and running flight tests without assembling a drone. Just remember that this kit is not a consumer product but a software development kit for professionals.
I hope you find this information helpful.
Well OK, I guess I'll give it another go then.
Thank you for the clarification and assistance. I'll be back after reinstalling Yocto and running into some issues.