I've tried extensive online searching to no avail.
I would like to know if the current finger tracking in RealSense is able to track visible fingers while a hand is holding an object.
For example, say I were holding an orange, and my pinkie and thumb were obstructed from the camera's line of sight by the orange: would the camera be able to track the three fingers which are in sight? Or say if I were holding the stem of a toy light saber, would I be able to track the parts of the fingers / hand which are in sight of the camera even if the finger tips and thumb were hidden from view?
Thanks~
If you set your project to track JT 1 (Joint 1, the knuckle in the middle of the finger) then you are almost guaranteed to have smooth tracking even when the fingers are closed around an object. This is because the middle knuckle will be visible to the camera whether your hand is held palm-forward or side-on.
The thumb tip is also a good option for tracking whilst the fingers are closed. Its position on the real-life hand means that even when all four fingers are closed around an object, the tip should still be visible to the camera most of the time.
When using the Unity game engine version of RealSense, you can also hedge your bets by setting up more than one landmark to track. So if you set it to watch both the Joint 1 and Thumb Tip, one of those will take over tracking seamlessly if the other becomes obscured from the camera's view.
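The fallback logic described above can be sketched in a few lines. This is a generic illustration, not actual RealSense SDK code: the landmark positions are assumed to come from your tracking module, with `None` standing in for an occluded landmark.

```python
def select_tracking_point(joint1, thumb_tip):
    """Return the first landmark that is currently tracked (not None),
    so tracking hands off to the other landmark when one is occluded."""
    for point in (joint1, thumb_tip):
        if point is not None:
            return point
    return None  # both landmarks occluded this frame
```

Checking the landmarks in a fixed order each frame means the hand-off happens automatically as soon as one landmark drops out, without any extra state to manage.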
Thanks!
To be clearer, I'd like to work with in-hand 3D scanning of objects. With the depth data, the background of a scene is easily cropped, which leaves just the hand occluding the object - if the RealSense finger tracking can locate the contour of the hand, then the hand can also be cropped. Basically it would enable a very easy-to-implement "remove hand/background from object" function. The main focus can then be on loop closing, etc. I notice the same has been done with a Kinect, but it requires a lot of bookkeeping to figure out where the fingers end and the object begins.
For reference:
http://pointclouds.org/documentation/tutorials/in_hand_scanner.php
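The cropping step described above amounts to two masks combined per frame: a depth window around the scanning volume, and a hand mask derived from the tracked hand contour. A minimal NumPy sketch, assuming `depth` is a per-pixel depth image in metres and `hand_mask` is a boolean image you've already built from the hand contour:

```python
import numpy as np

def crop_object(depth, hand_mask, near=0.2, far=0.6):
    """Keep only pixels whose depth falls inside the scanning volume
    (near..far metres) and that are not covered by the hand mask."""
    in_range = (depth > near) & (depth < far)
    return in_range & ~hand_mask
```

The resulting boolean mask selects just the object's pixels, so the downstream pipeline (registration, loop closing) never has to reason about where the fingers end and the object begins.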
There are people on this forum with far more experience in using RealSense for 3D scanning than I have, as it's something I've not actually done myself.
You may be interested in something Intel announced last week at their IDF 16 developer conference called Project Alloy: a merged-reality headset powered by RealSense, where you can bring real-world items into a virtual environment and have those real objects interact with the virtual ones.
https://www.engadget.com/2016/08/16/intel-announces-project-alloy-an-all-in-one-vr-headset/