I developed a university project under the "Robótica-Autismo" project at the University of Minho, Portugal. It consists of an emotion recognition system that is able to detect 6 different facial expressions. The system is interfaced with the Zeno R50 robotic platform from Robokind. A video of the experimental setup is available at the following link: https://www.youtube.com/edit?video_id=sRbZPZVENuU
The system uses the recent Intel RealSense together with machine learning techniques in order to classify the user's facial expression.
Actually, the video is available at this link: https://www.youtube.com/watch?v=sRbZPZVENuU
How can I train new hand gestures/poses using the Intel RealSense?
Hi Samuel,
You can use the hand skeleton data that is available in the Intel RealSense SDK. With the joint position data, you can train a machine learning model to classify certain gestures. For my project I used Support Vector Machines (SVM), but you can use another machine learning method. Since I developed my project in C#, I used a C# machine learning library: Accord.NET (Accord.MachineLearning).
In the literature there are many articles that use the Kinect together with machine learning approaches to classify gestures.
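As a rough illustration of the approach described above, here is a minimal sketch of training a multi-class SVM on hand-joint feature vectors with Accord.NET (following the Accord.NET 3.x API). The feature vectors and labels below are placeholder values; in practice each row would be the flattened joint coordinates obtained from the RealSense SDK hand-tracking output, and the gesture names are only examples.

```csharp
using System;
using Accord.MachineLearning.VectorMachines;
using Accord.MachineLearning.VectorMachines.Learning;
using Accord.Statistics.Kernels;

class GestureTrainingSketch
{
    static void Main()
    {
        // Placeholder training data: each row is one captured frame,
        // e.g. the (x, y, z) coordinates of the tracked hand joints
        // flattened into a single feature vector. Replace with real
        // joint data collected from the RealSense SDK.
        double[][] inputs =
        {
            new double[] { 0.1, 0.2, 0.3 },   // sample of gesture 0 (e.g. "fist")
            new double[] { 0.9, 0.8, 0.7 },   // sample of gesture 1 (e.g. "open hand")
            new double[] { 0.1, 0.3, 0.2 },   // another gesture 0 sample
            new double[] { 0.8, 0.9, 0.8 },   // another gesture 1 sample
        };

        // Class label for each training sample.
        int[] outputs = { 0, 1, 0, 1 };

        // Multi-class SVM with a Gaussian kernel, trained with SMO.
        var teacher = new MulticlassSupportVectorLearning<Gaussian>()
        {
            Learner = (param) => new SequentialMinimalOptimization<Gaussian>()
            {
                UseKernelEstimation = true // estimate kernel parameters from the data
            }
        };

        MulticlassSupportVectorMachine<Gaussian> machine = teacher.Learn(inputs, outputs);

        // At runtime, classify the joint features of the current frame.
        double[] currentFrame = { 0.15, 0.25, 0.25 };
        int predictedGesture = machine.Decide(currentFrame);
        Console.WriteLine("Predicted gesture class: " + predictedGesture);
    }
}
```

The same feature vectors could of course be fed to any other classifier; the key step is turning the per-frame joint positions into a fixed-length numeric vector.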
Best regards,
Vinícius Silva