Hello!
Is there support within the Intel PME Library for automatic gesture spotting? Or alternatively, are there techniques anyone has utilized to implement automatic gesture spotting with the Curie?
I'm trying to implement a system to detect gestures (hand waves, etc.) using the Curie's onboard accelerometer, without requiring the user to mark when a gesture begins and ends.
Thanks!
Hi Wizink,
Thanks for reaching out.
Let me investigate your question, and I will get back to you as soon as we have some helpful information.
Have a nice day.
Regards,
Leonardo R.
Hi Leonardo,
Were you able to find any more information about gesture spotting support?
Thanks!
Jonathan
Hi Jonathan,
Yes, here is the information that we have: the PME Library in the Arduino 101 Boards Manager package and the Curie ODK are designed primarily to provide access to the pattern-matching engine hardware and to illustrate how to use its learning function, how to save and restore knowledge data, and how to use the radial basis function (RBF) and k-nearest-neighbor (KNN) classification modes. There is also a very basic demonstration of the pattern-matching capability using motion. The purpose is to make the engine available to a wide audience to explore, learn from, and build innovative solutions with, using the Curie's built-in sensing and any data desired from the wide variety of sensors available in the Arduino community. Because it is intended for the broadest audience, it doesn't contain examples for specific solution domains. We are absolutely interested in contributed open-source examples from the developer community, which can be submitted via the normal GitHub pull-request process. This is the repository: https://github.com/01org/Intel-Pattern-Matching-Technology.
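As a minimal illustration of that learn/classify flow (this sketch assumes the CuriePME API from the repository above; the training data is purely a placeholder):

```cpp
// Minimal sketch of the learn/classify flow, assuming the CuriePME API
// from the repository above (PME neurons hold 128-byte patterns).
// The training data here is purely a placeholder.
#include "CuriePME.h"

const int vectorNumBytes = 128;

void setup() {
  Serial.begin(9600);
  while (!Serial);

  CuriePME.begin();

  // Teach one example pattern as category 1. A real application would
  // call learn() several times per gesture with captured sensor vectors.
  byte example[vectorNumBytes];
  for (int i = 0; i < vectorNumBytes; i++)
    example[i] = i;  // placeholder training data
  CuriePME.learn(example, vectorNumBytes, 1);
}

void loop() {
  // Classify a candidate vector; classify() returns the matched
  // category, or CuriePME.noMatch if no neuron fires.
  byte candidate[vectorNumBytes];
  for (int i = 0; i < vectorNumBytes; i++)
    candidate[i] = i;

  uint16_t answer = CuriePME.classify(candidate, vectorNumBytes);
  if (answer == CuriePME.noMatch) {
    Serial.println("No match");
  } else {
    Serial.print("Matched category: ");
    Serial.println(answer);
  }
  delay(1000);
}
```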
Intel also provides the Intel Knowledge Builder, a cloud-based service that works with data collected by developers, optimizes and normalizes that data using advanced analytical techniques, and produces knowledge packs that can be deployed in specific product solutions. The Knowledge Builder is available through your Intel sales representative. You can find more information about it here: https://software.intel.com/en-us/intel-knowledge-builder-toolkit
I hope you find this helpful.
Thank you for your patience.
Have a nice day.
Regards,
Leonardo R.
Hi Jonathan,
What you're trying to do should be possible with the Pattern Matching Engine; the challenge will be how you choose to filter your incoming IMU data for the gestures you're trying to catch. The PME takes 128 bytes of data and compares them against the learned 128-byte patterns. If you look at Eric's "Drawing in the Air" example, most of his code is about streaming a large chunk of IMU data into a buffer, then filtering it down to 128 bytes without losing the integrity of the pattern. Using a button to signify the beginning and end allows you to capture the whole gesture and pare it down to the same size, even when it is performed at different speeds. https://github.com/01org/Intel-Pattern-Matching-Technology/tree/master/examples/DrawingInTheAir
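As a rough sketch of that pare-down step, here is the kind of helper you could drop into a sketch (the names rawX/rawY/rawZ and sampleCount are illustrative, not taken from the actual example code):

```cpp
// Average a variable-length capture of accelerometer samples down to a
// fixed 128-byte PME vector, so fast and slow performances of the same
// gesture produce similar patterns. Illustrative helper, not the
// actual DrawingInTheAir code.
const int vectorNumBytes = 128;
const int axes = 3;                                  // x, y, z
const int samplesPerVector = vectorNumBytes / axes;  // 42 slots of x,y,z

// Map a signed 16-bit accelerometer reading into one unsigned byte,
// since PME patterns are byte vectors.
byte compress(int reading) {
  return (byte)map(reading, -32768, 32767, 0, 255);
}

void undersample(const int rawX[], const int rawY[], const int rawZ[],
                 int sampleCount, byte vector[]) {
  memset(vector, 0, vectorNumBytes);
  int window = sampleCount / samplesPerVector;  // raw samples per slot
  if (window < 1) window = 1;

  int v = 0;
  for (int slot = 0; slot < samplesPerVector; slot++) {
    long sumX = 0, sumY = 0, sumZ = 0;
    int start = slot * window;
    int n = 0;
    for (int i = start; i < start + window && i < sampleCount; i++) {
      sumX += rawX[i];
      sumY += rawY[i];
      sumZ += rawZ[i];
      n++;
    }
    if (n == 0) n = 1;  // guard for very short captures
    vector[v++] = compress(sumX / n);
    vector[v++] = compress(sumY / n);
    vector[v++] = compress(sumZ / n);
  }
}
```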
An alternative approach would be to do some basic filtering of the data as it streams in from the IMU in real time, capture the stream in a 128-byte buffer, then feed the buffer to the PME over and over looking for a match. If you look at the learning example from the General Vision library, that's the approach they take: http://www.general-vision.com/software/curieneurons/ In their example, they simply push a new data point every X milliseconds into an extremely short revolving buffer. That makes it work great for catching impulses, but not for complex gestures. But it DOES work without the button press.
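A minimal sketch of that revolving-buffer approach might look like the following (this assumes the CuriePME and CurieIMU libraries for the Arduino 101; keeping only the x axis and the 10 ms sample interval are simplifications for illustration, not values from the General Vision example):

```cpp
// Revolving-buffer gesture spotting: push one compressed sample into a
// 128-byte circular window on a fixed interval, then feed the whole
// window to the PME each time, looking for a match.
#include "CurieIMU.h"
#include "CuriePME.h"

const int vectorNumBytes = 128;
const unsigned long SAMPLE_INTERVAL_MS = 10;  // "every X ms" -- tune this

byte window[vectorNumBytes];
int head = 0;
unsigned long lastSample = 0;

void setup() {
  Serial.begin(9600);
  while (!Serial);
  CurieIMU.begin();
  CuriePME.begin();
  // ... train the PME here with learn() calls for each gesture ...
}

void loop() {
  if (millis() - lastSample < SAMPLE_INTERVAL_MS)
    return;
  lastSample = millis();

  // Push one compressed accelerometer sample into the circular window.
  // Only the x axis is kept here, purely to keep the sketch short.
  int ax, ay, az;
  CurieIMU.readAccelerometer(ax, ay, az);
  window[head] = (byte)map(ax, -32768, 32767, 0, 255);
  head = (head + 1) % vectorNumBytes;

  // Linearize the circular window so the oldest sample comes first,
  // then ask the PME whether the current window matches anything.
  byte vector[vectorNumBytes];
  for (int i = 0; i < vectorNumBytes; i++)
    vector[i] = window[(head + i) % vectorNumBytes];

  uint16_t answer = CuriePME.classify(vector, vectorNumBytes);
  if (answer != CuriePME.noMatch) {
    Serial.print("Gesture category: ");
    Serial.println(answer);
  }
}
```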
intel_corp, thank you for your response.
LucasAinsworth, yes, I have been heavily referencing the "Drawing in the Air" example. The approach I've taken is essentially what you suggested: compute a basic rolling average of the data from the IMU, capture it to a buffer, then feed it to the PME once the buffer is full. A few challenges I anticipate: having to teach a number of null gestures, which will probably limit the total number of unique gestures the system can recognize, and needing a "debounce" to prevent the same gesture from being identified multiple times in rapid succession.
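For the debounce, one simple option is a hold-off timer after each reported match (a hypothetical helper sketch; the classification step would be as in the revolving-buffer approach above):

```cpp
// One way to debounce matches: after reporting a gesture, suppress
// further matches of the same category until the stream of matches has
// been quiet for DEBOUNCE_MS. Hypothetical helper, not from the
// thread's examples.
const unsigned long DEBOUNCE_MS = 750;  // hold-off period -- tune to taste

uint16_t lastCategory = 0x7fff;   // start at "no match" (CuriePME.noMatch)
unsigned long lastMatchTime = 0;

// Call with each raw match from the PME; returns true only when the
// match should actually be reported.
bool debounceMatch(uint16_t category) {
  unsigned long now = millis();
  if (category == lastCategory && now - lastMatchTime < DEBOUNCE_MS) {
    lastMatchTime = now;  // same gesture still in progress; keep suppressing
    return false;
  }
  lastCategory = category;
  lastMatchTime = now;
  return true;
}
```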
Hi Jonathan,
It was a pleasure to help you. Feel free to contact us whenever you have questions.
Have a nice day.
Regards,
Leonardo R.
