Hello
I would like to use the RealSense R200 camera to convert body language (from the belly up to the head) into speech.
I think it should use machine learning and the OpenCV library.
Can anybody help me clarify my idea and how to do it?
Thank you.
P.S.: I have to use the R200 camera in this project. I also have an UP Board, so I will create a portable device that I can carry anywhere I want.
Information about the position of areas of the body can be captured using the R200's 'Person Tracking' system.
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_pt_person_tracking.html Intel® RealSense™ SDK 2016 R2 Documentation
My own approach is to animate a full-body avatar based on camera inputs and make logic decisions based on what the joints of that avatar are doing. This is done using a custom system I built called CamAnims. My blog post below explains the basic principles of CamAnims.
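To illustrate the joint-logic idea above, here is a minimal Python sketch. It does not use the RealSense SDK itself; the joint names and coordinates are hypothetical stand-ins for whatever the Person Tracking module (or an avatar driven by it) would report. The logic just maps joint positions to a simple body-language label:

```python
def classify_pose(joints):
    """Return a simple body-language label from joint positions.

    joints: dict mapping joint name -> (x, y) in image coordinates,
    where y increases downward (the usual camera convention).
    These joint names are illustrative, not an SDK API.

    'hands_up' - both hands are above the head
    'arms_out' - hands extended well past the shoulders horizontally
    'neutral'  - anything else
    """
    head = joints["head"]
    left_hand = joints["left_hand"]
    right_hand = joints["right_hand"]
    left_shoulder = joints["left_shoulder"]
    right_shoulder = joints["right_shoulder"]

    # Smaller y means higher in the image.
    if left_hand[1] < head[1] and right_hand[1] < head[1]:
        return "hands_up"

    # Hands at least 50 px outside the shoulder line on each side.
    if (left_hand[0] < left_shoulder[0] - 50 and
            right_hand[0] > right_shoulder[0] + 50):
        return "arms_out"

    return "neutral"


# Made-up sample frame: both hands raised above the head.
example = {
    "head": (160, 40),
    "left_shoulder": (130, 90),
    "right_shoulder": (190, 90),
    "left_hand": (150, 20),
    "right_hand": (170, 25),
}
print(classify_pose(example))  # prints "hands_up"
```

In a real pipeline each recognized label would then be passed to a text-to-speech engine, which is the "into speech" half of the original question.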
Body language includes the position of the fingers. Can the R200's 'Person Tracking' capture that too?
The R200 unfortunately does not support finger-joint tracking the way the SR300 camera model does. It can only follow the palms, through a method such as blob tracking.
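For reference, blob tracking at its simplest means grouping connected foreground pixels (for example, from a thresholded depth image) into regions and following their centroids from frame to frame. Below is a dependency-free Python sketch of the detection step only; the binary mask is made up, and a real pipeline would produce it by thresholding the R200 depth stream (e.g. with OpenCV):

```python
from collections import deque

def find_blobs(mask, min_size=3):
    """Find connected regions of 1s in a binary grid and return their centroids.

    mask: list of rows, each a list of 0/1 values.
    Returns a list of (row, col) centroids for blobs of at least min_size pixels.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill over 4-connected neighbours.
                queue = deque([(r, c)])
                seen[r][c] = True
                pixels = []
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_size:
                    cy = sum(p[0] for p in pixels) / len(pixels)
                    cx = sum(p[1] for p in pixels) / len(pixels)
                    centroids.append((cy, cx))
    return centroids


# Toy mask with two 'palm' blobs; real input would come from depth thresholding.
mask = [
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
]
print(find_blobs(mask))  # prints [(0.5, 0.5), (1.5, 4.5)]
```

Tracking then amounts to matching each frame's centroids to the nearest centroids from the previous frame, which is enough to follow two palms even without finger joints.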