Hi Guys,
I'm totally new to this field, so this question may sound strange, but here we go:
Is it possible to display various features like object recognition, person tracking, SLAM and gesture recognition (plus the raw data/video stream) at the same time on different screens?
From what I learned during my research, it should be possible and only limited by the hardware resources, because the camera only sends video streams and the functions for the features run on the machine the camera is connected to, not on the camera itself.
Is my assumption correct?
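That assumption can be sketched as a simple fan-out: one frame from the camera feeds several independent feature pipelines on the host, and each result could then be rendered in its own window or screen. The two "feature" functions below are hypothetical stand-ins (not part of any SDK) just to illustrate the data flow:

```python
import numpy as np

def mean_brightness(frame):
    # Stand-in for a recognition-style pipeline that consumes a frame.
    return float(frame.mean())

def frame_diff(prev, cur):
    # Crude motion cue; stand-in for a tracking pipeline.
    return float(np.abs(cur.astype(int) - prev.astype(int)).mean())

def fan_out(frame, consumers):
    """Feed one frame to every registered feature pipeline.

    The camera only delivers the raw stream; how many consumers you can
    run in parallel is limited by the host CPU/GPU, not the camera.
    """
    return {name: fn(frame) for name, fn in consumers.items()}

frame = np.full((4, 4), 10, dtype=np.uint8)
results = fan_out(frame, {"brightness": mean_brightness})
```

Each entry of `results` would then go to its own display window, which is what makes the "one camera, many screens" setup hardware-bound rather than camera-bound.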
At the end it should run on Ubuntu 16.04 with ROS Kinetic.
I don't know what hardware is needed for this, except USB 3.0 as the interface for the camera...
Sincerely, Alex
PS: I hope this thread/question is in the correct category.
"You can use SLAM with SDK 2.0 by using it in combination with a SLAM system for the OpenCV platform called ORB_SLAM2."
Does this mean that the SDK library doesn't provide ready-to-use code for applications like object recognition or gesture recognition, and that I instead need to use something like OpenCV for these tasks?
Yes, the functions that you mentioned need to be obtained by combining SDK 2.0 with other software platforms such as OpenCV, ROS, etc.
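To give a feel for what "combining SDK 2.0 with OpenCV" looks like in practice, here is a minimal sketch: the SDK delivers frames, you convert them to NumPy arrays, and from there OpenCV (or any other library) takes over. The `depth_to_display` helper and the 4000 mm range are my own illustrative choices, not part of either SDK; the commented-out capture loop uses the real `pyrealsense2` pipeline API but needs a connected camera, so it is shown as a sketch only:

```python
import numpy as np

def depth_to_display(depth, max_mm=4000):
    """Scale a 16-bit depth frame (millimetres) to an 8-bit image
    suitable for cv2.imshow or further OpenCV processing."""
    clipped = np.clip(depth.astype(np.float32), 0, max_mm)
    return (clipped / max_mm * 255).astype(np.uint8)

# Hypothetical capture loop (requires a RealSense camera; sketch only):
# import pyrealsense2 as rs
# import cv2
# pipe = rs.pipeline()
# pipe.start()
# frames = pipe.wait_for_frames()
# depth = np.asanyarray(frames.get_depth_frame().get_data())
# cv2.imshow("depth", depth_to_display(depth))
```

The same converted array could just as well be published as a ROS image topic, which is how the ROS Kinetic side of the setup would consume it.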
