Software Archive
Read-only legacy content

performance benchmarks - finger and hand data

Scott_W_1
Beginner
434 Views

I am about to start creating algorithms to distinguish between more gestures, but it would be VERY helpful if the Intel RealSense crew could provide some information about the built-in hand-tracking procedures that I could not find in the documentation.

I could use many parameters to make fine distinctions between many of the American Sign Language handshapes. The parameters I'm looking at include:

  • Local x, y, z rotations
  • Distance between the thumb and various other joints
  • Distance between the index finger tip and various other joints
  • Foldedness

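For the distance-based parameters above, here is a minimal sketch of the kind of computation involved. `Point3D` is a stand-in for the 3D world-position data the SDK returns per joint (in the RSSDK that would be the `positionWorld` field of `PXCHandData::JointData` — treat the exact field name as an assumption to verify against your SDK version):

```cpp
#include <cmath>

// Stand-in for the SDK's per-joint 3D world position (x, y, z in meters).
struct Point3D { float x, y, z; };

// Euclidean distance between two joint positions, e.g. the thumb tip
// and the index finger tip, usable as a handshape feature.
float jointDistance(const Point3D& a, const Point3D& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```

Computing a handful of these distances per frame is cheap relative to the tracking itself, so the feature set can likely be extended without worrying much about per-frame cost.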
I'm considering either using the built-in gesture recognition and then refining the recognition with any of the above OR starting from scratch.

I COULD build a tool to use a recorded set of gestures, then measure how long each approach takes to recognize the gestures AND get the accuracy, but if you already have some information, it will help me tremendously.
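If it comes to building that measurement tool, the harness itself is straightforward. Below is a hedged sketch: `RecordedFrame` is a hypothetical container for whatever per-frame data your snapshot application captures, plus a ground-truth label, and any recognizer (built-in wrapper or custom) can be passed in as a callable to get wall-clock time and accuracy on the same recorded set:

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <vector>

// Hypothetical recorded frame: captured joint/finger data plus the
// ground-truth gesture label it should be recognized as.
struct RecordedFrame {
    std::vector<float> features;   // placeholder for joint/finger data
    std::string trueLabel;
};

struct BenchmarkResult {
    double totalSeconds;  // wall-clock time over all frames
    double accuracy;      // fraction of frames labeled correctly
};

// Run any recognizer over a recorded gesture set, measuring both the
// total time taken and how often it matches the ground truth.
BenchmarkResult benchmark(
    const std::vector<RecordedFrame>& frames,
    const std::function<std::string(const RecordedFrame&)>& recognize) {
    int correct = 0;
    auto start = std::chrono::steady_clock::now();
    for (const auto& f : frames)
        if (recognize(f) == f.trueLabel) ++correct;
    auto end = std::chrono::steady_clock::now();
    return { std::chrono::duration<double>(end - start).count(),
             frames.empty() ? 0.0 : double(correct) / frames.size() };
}
```

Running both approaches (built-in plus refinement vs. from scratch) through the same `benchmark` call on the same recording would give directly comparable numbers.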

In order to use the built-in gesture recognition and add my own, I would need to know how the .gesta and .xml files work in the handdata folder. If you can explain the organization of the data, I can configure my snapshot application to output the data in the same format. I could probably figure it out, but it would save time if you could provide that information to me.

Can you share any information on the gesture recognition algorithms you use? I don't want to duplicate what you have; I would test supplemental algorithms.

The XML file confused me. I tried setting the various parameters to true and false, but did not see any noticeable difference in the gesture recognition.

If I am to create my own algorithms, it would be helpful if you could explain the following:

  1. When I query the finger data or joint data, it looks like I am getting all of the parameters. Is there any measurable difference in the speed of retrieving joint rotations, vs. joint positions?
  2. Are there any measurable differences in the speed of retrieving finger data (foldedness) vs. joint data?
  3. Could you provide an example using the speed parameters? Right now I am focused on static poses, but two letters in the ASL alphabet have movement.
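On point 3, one workable approach for the two moving letters (J and Z are the ASL letters drawn with motion) is to compute a joint's mean speed over a short window of tracked positions and threshold it to tell static handshapes from moving ones. This is a sketch under assumptions, not the SDK's own method; the 0.05 m/s threshold is a made-up starting value to tune:

```cpp
#include <cmath>
#include <vector>

// One tracked joint position plus its timestamp in seconds.
struct Sample { float x, y, z; double t; };

// Mean speed (meters/second) of a joint over a window of samples:
// total path length divided by elapsed time.
double meanSpeed(const std::vector<Sample>& window) {
    if (window.size() < 2) return 0.0;
    double path = 0.0;
    for (size_t i = 1; i < window.size(); ++i) {
        double dx = window[i].x - window[i - 1].x;
        double dy = window[i].y - window[i - 1].y;
        double dz = window[i].z - window[i - 1].z;
        path += std::sqrt(dx * dx + dy * dy + dz * dz);
    }
    double dt = window.back().t - window.front().t;
    return dt > 0.0 ? path / dt : 0.0;
}

// A pose counts as "moving" if the joint exceeds a speed threshold
// (0.05 m/s here is a hypothetical value to tune experimentally).
bool isMoving(const std::vector<Sample>& window, double thresholdMps = 0.05) {
    return meanSpeed(window) > thresholdMps;
}
```

Feeding `isMoving` the fingertip track for the last half second or so would gate the static-pose classifier on/off.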
1 Reply
You_Yun_L_
Beginner

Hi

I also have problems using the speed parameter when tracking the hand joints.

The RealSense SDK provides an example named "FF_HandsConsole" which demonstrates the hand joint features.

However, when I try to print out the parameter "jointData.speed.x" (and likewise jointData.speed.y and jointData.speed.z), I always get "0" instead of the speed of the joints.

Does anyone have any idea about this?

  Thanks

 
