
Unity Tip: Reducing Joint Lock-Up Through TrackingAction Editing

MartyG
Honored Contributor III

Hi everyone,

During one of my frequent experiments with RealSense, I was looking through the code of the TrackingAction Unity script in the SDK Toolkit and noticed that line 239 referenced the X, Y and Z axes, but the lines that followed (241 and 242) only defined references for X and Y.

1.jpg

So I copy-and-pasted in a line that referenced Z.

2.jpg
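In outline, the edit looked like the snippet below.  This is only a hypothetical illustration of the pattern - the variable names are not copied from the real TrackingAction.cs, so base your new Z line on whatever the X and Y lines around line 239 actually look like in your own copy.

    // Hypothetical illustration only - not the actual TrackingAction.cs source.
    // One line references all three axes...
    float xRot, yRot, zRot;

    // ...but the lines that followed originally only defined X and Y:
    xRot = smoothedRotation.eulerAngles.x;
    yRot = smoothedRotation.eulerAngles.y;

    // The copied-and-pasted extra line that also defines Z:
    zRot = smoothedRotation.eulerAngles.z;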

Then I ran my project to see what the effect would be on my full-body avatar.  Previously, it had been suffering from an issue where the arms would lock up when the hands were at certain Z-distances from the camera and would need tracking to be dropped and re-acquired to reset them to the correct position.

With this additional line of code in place, I could not make the joint lock-up occur no matter how long I tried.  It was also even easier than before to reach an arm across the front of the chest and touch the other hand.

3.jpg

That isn't to say that everything was perfect.  With this new Z-code installed, the arms seemed more prone to bending the wrong way when moving the hands in the Z-direction (towards and away from the camera) or pulling the arms inwards towards the sides of the body.

This isn't necessarily a problem with the script, though, but rather with the arms' physics colliders and the way the arm reacts when it physically collides with the torso's collider field.  You may not experience such issues in your own project, especially if the objects that you are controlling have a simpler structure than this avatar arm.

Edit: moving the avatar's shoulders forwards from the center of the torso sides (an unnatural position they had been put in to help deal with the joint lock-ups) to a human-like position adjacent to the front of the chest reduced a lot of the arm movement problems introduced by the new line of code.  So, as I suspected, it was the colliders rubbing together and pushing the arm around rather than a breakage caused by the extra line.

IMPORTANT NOTE

If you want to give this Z-code technique a try yourself, I can't stress enough the importance of fully backing up your project first so you can restore it quickly if something goes wrong, and of editing a COPY of the TrackingAction script rather than the original.  The link below is to an article I wrote a while back that explains how to make multiple individually numbered TrackingAction script copies.

https://software.intel.com/en-us/forums/topic/549805

That said, if you have a full backup of the project and are happy to experiment, you may find it more convenient to edit the original TrackingAction that is already installed throughout your project, since you know you can easily restore it from the backup if something goes wrong.

If I haven't stated it enough times - BACKUP FIRST.  :)

Edit 2: I fixed the final problems with the shoulders after some more shoulder-position and collider tweaking, and tested the rest of the rotation-based avatar functions extensively.  These were the results:

-  The ease of movement of the arms and the intricacy of achievable poses were increased hugely by the extra line of code.  The arms now behave just like human ones, capable of every kind of action (touch one hand to the other, touch the opposite elbow, stroke the face, etc.).

-  The Z-powered upper body joint that enables the avatar to lean backwards and forwards at the waist could now lean back and forth further.

-  The Z-powered eyebrows and lips seemed to be a bit easier to move, though - as has always been the case - the amount of movement increase was not as dramatic as with hand-controlled actions, due to the shorter travel distance of face landmarks compared to hand landmarks.

MartyG
Honored Contributor III

Thanks very much for making this post a sticky at the top.  ^_^

Ariska_Hidayat
Beginner

Wow, is there any video or application that can be tried? :D

MartyG
Honored Contributor III

Hi Ariska!  A new September demo of the RealSense-powered full-body avatar was created this morning.  It contains the very latest advances in our arm control system and bug fixes, and it also incorporates the fix for Z-angle locking described in this article.

http://sambiglyon.org/sites/default/files/RealSense-Avatar-Sept-2015.zip

MartyG
Honored Contributor III

A few of the Unity Toolkit SDK files generate yellow warning messages in Unity 5.1 because elements in them are deprecated or obsolete in that version.  If these messages are generated repeatedly, they can be a drag on the camera's performance, because Unity has to keep pausing to deal with the script element it has a problem with.  So I decided to start looking for fixes in these scripts too.

Note: if your Unity RealSense project is small and simple then it may not be worth making these changes, as the running speed is not likely to be greatly impacted by the generation of the messages.  Chasing after performance gains is most worthwhile when you have a large, complex environment with many scripts and still have further features to add.

These error messages are only generated in the Console window of Unity when doing a full Build and Run test of the project, and not when running in preview mode in the editor.

Also bear in mind that if these changes have not been officially implemented by the time of the next SDK release, updating your project to the latest Unity Toolkit in that release may overwrite your edits, meaning you would have to go back into the scripts and make them again.

FaceTrackingRule.cs (in the RSUnityToolkit > Internals > Rules folder)

On line 181, a variable called 'position' was defined but never used anywhere else in the script, according to the Unity error message.

1.jpg

Tests found that the yellow error could be eliminated by deleting lines 181-184 completely from the script, and there was no negative impact on face tracking.
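Purely as an illustration of the kind of block involved (this is not the real FaceTrackingRule.cs code, just a made-up example of the pattern), the situation is a local variable that is set up but never read again afterwards:

    // Hypothetical example only - not the actual FaceTrackingRule.cs source.
    float position = 0.0f;   // assigned here...
    // ...but never read by any later line, so Unity's compiler warns about it.
    // Deleting the declaration (and any lines that exist only to fill it in)
    // removes the warning with no change in behaviour.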

There are a few other scripts in the Toolkit with such unused variables but they are more complex than this one and look more breakable if tinkered with, so I left those, happy to just eliminate as many of the error messages as I could.  Next up was ...

SenseToolkitAssetModificationProcessor.cs (in the RSUnityToolkit > Internals > Editor folder)

The yellow error in this script says that the instruction 'FindSceneObjectsOfType' on line 90 is obsolete.

2.jpg

Unity now prefers the shorter instruction 'FindObjectsOfType' instead.

3.jpg

Again, making this change eliminated this particular yellow error and had no obvious impact on performance.
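To illustrate the swap (the exact type being searched for on line 90 may be different to the one in my example below), it is a straight one-for-one replacement:

    // Old call, now obsolete in Unity 5.x:
    // Object[] found = Object.FindSceneObjectsOfType(typeof(SenseToolkitManager));

    // Newer equivalent that clears the warning:
    Object[] found = Object.FindObjectsOfType(typeof(SenseToolkitManager));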

BaseActionEditor.cs (in the RSUnityToolkit > Internals > Editor folder)

In this script, the instruction EditorGUIUtility.LookLikeInspector() on line 80 is flagged as obsolete.

4.jpg

Replace this with two new lines:

        EditorGUIUtility.labelWidth = 0;
        EditorGUIUtility.fieldWidth = 0;

This makes this particular yellow error disappear.
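In other words, the in-place change is simply the following (the surrounding editor code is not shown here):

    // Old line 80, now obsolete:
    // EditorGUIUtility.LookLikeInspector();

    // Replacement lines:
    EditorGUIUtility.labelWidth = 0;
    EditorGUIUtility.fieldWidth = 0;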

******

After removing these error messages, it seemed visually as though there was a mild performance gain in my project, though this kind of thing is hard to quantify with hard data.  For me, the most important thing was to minimize anything that might drain processing resources over time and degrade performance as a user's session progresses.

 

MartyG
Honored Contributor III

Thanks Amr!  It already is kind-of sticky at the top of the forum, since the parent topic of this article ('Reducing Joint Lock-Up Through TrackingAction Editing') is sticky.  I take the point though that the error-message editing info in the comments would be easier to find if it was a separate article.  *smiles*

MartyG
Honored Contributor III

Thanks Ragenbo!  If you liked this article, you may also find my more recent one useful.  It covers editing the TrackingAction to control constraints, inverts and other settings whilst a program is running.

https://software.intel.com/en-us/forums/realsense/topic/601112

Amir_K_2
Beginner

Wow man great ! 

Amir_K_2
Beginner

Is there any video that can be tried?

regards 
Amir

MartyG
Honored Contributor III

If you mean an app of the avatar that you can load in and play with ... I do not currently have a demo version available.

If you mean a YouTube video of the avatar moving ... I'm planning to make a new video, but I only have one from a year ago that shows a much earlier and rougher version of the avatar, without a lot of the advanced capabilities and realism of the current version.  You can still see the basic principles from it, though.

https://www.youtube.com/watch?v=_wFvcmNM5xA

Edit: most of the RealSense techniques that my company develops get turned into how-to construction guides on this forum.  I thought it would be interesting, though, to mention here the latest avatar features that do not have guides yet.

*  The player character and computer-controlled characters can "see" each other's presence via a 'raycast' beam projected from each of them, configured to react only when the ray collides with another character.  The system can also detect whether two characters are facing towards each other.  If they can see each other, then they can read the virtual emotional expressions animated on each other's faces and have their own emotional state influenced by seeing those emotions.

*  The "flesh" sections of the avatar body can use a new feature introduced in Unity 5.2 called Collision Impulse to detect how hard a body section has been impacted by another object.  Our code then translates the impact readings into descriptive sensations such as "Soft", "Painful" and "Very Painful".  For instance, a gentle touch during a hug may generate a Soft sensation, whilst running into a chair would generate a Painful or Very Painful sensation.  This is used as a factor in the calculation of the avatar's overall emotional state at that moment.  A rough sketch of the idea is shown below.
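Here is a much-simplified sketch of the impulse idea.  It is an illustration rather than our actual game code, and the class name and threshold values are just examples to be tuned per body section:

    using UnityEngine;

    // Illustrative only - attach to a "flesh" body section that has a Collider
    // (and a Rigidbody somewhere in its hierarchy).
    public class FleshImpactSensor : MonoBehaviour
    {
        // Example thresholds - tune these per body section.
        public float painfulThreshold = 2.0f;
        public float veryPainfulThreshold = 6.0f;

        void OnCollisionEnter(Collision collision)
        {
            // Collision.impulse (added in Unity 5.2) is the total impulse
            // applied by this contact to resolve the collision.
            float impact = collision.impulse.magnitude;

            string sensation;
            if (impact >= veryPainfulThreshold)
                sensation = "Very Painful";
            else if (impact >= painfulThreshold)
                sensation = "Painful";
            else
                sensation = "Soft";

            Debug.Log(gameObject.name + " impact felt: " + sensation);
            // The sensation would then feed into the avatar's overall
            // emotional state calculation for that moment.
        }
    }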

Adam_A_
Innovator

Wow, thanks for the article. I think we should make more Unity-RealSense topics and tutorials. :)

MartyG
Honored Contributor III

You're very welcome, Adam!  I've written many other guides to using Unity with RealSense.  A quick way to find them is to go to the search box at the top of the forum that has the words "powered by Google" in it and search for this term: 

unity tip realsense marty

It's been a while since I've posted a guide here, as I've been occupied full-time on building my company's RealSense game and its RealSense-powered full-body player avatar.  This uses TrackingActions in combination with systems I wrote myself to control Unity animation clips, moving back and forth through each animation in real time to animate the avatar's body, limbs and facial features.

Using animation clips in combination with carefully constructed logic programming enabled us to mostly neutralize the drawbacks of using TrackingActions directly, such as an object's position "flipping out" when it is moved to a complex or extreme angle.  It also enabled us to set exact limits on an object's position and rotation so that its movement can never go outside a defined range.  This is especially vital for controlling an avatar with the camera, as you don't want the arms and limbs to twist into impossible positions.
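To illustrate the general idea with a much-simplified, hypothetical example (not our actual avatar code - the clip name and range values are just placeholders), a script like the one below takes a value from an object that a TrackingAction is moving and scrubs a legacy animation clip back and forth within a clamped 0-1 range, so the pose can never leave the limits baked into the clip:

    using UnityEngine;

    // Illustrative only.  Scrubs a legacy Animation clip to a normalized time
    // derived from a tracked object's position, clamped so the pose can never
    // leave the range defined by the clip.
    [RequireComponent(typeof(Animation))]
    public class TrackedClipScrubber : MonoBehaviour
    {
        public string clipName = "ArmRaise";   // example clip name
        public Transform trackedObject;        // object driven by a TrackingAction
        public float minY = 0.0f;              // tracked range mapped onto the clip
        public float maxY = 1.0f;

        private Animation anim;

        void Start()
        {
            anim = GetComponent<Animation>();
            anim[clipName].speed = 0f;         // we drive the clip time manually
            anim.Play(clipName);
        }

        void Update()
        {
            // Map the tracked height into 0..1, clamp it, and sample the clip there.
            float t = Mathf.InverseLerp(minY, maxY, trackedObject.position.y);
            anim[clipName].normalizedTime = Mathf.Clamp01(t);
        }
    }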

Other features of our avatar include:

- Lifting the hand above the head and waving the arm with a side-to-side shake of the RL hand

- Stretching the arm out fully to the side

- Realistic flexing of facial and body flesh, and breathing movements (e.g. the chest rising and falling, moving the upper body with it, and the stomach going in and out)

- Independent control of waist-bending and leg crouching at the same time, with the ability to use the RL head to turn the avatar's head freely whilst crouched, to look all around the floor

- Side-stepping for easy reach of adjacent objects without having to turn the avatar around and walk back to that location

- 360-degree turning of the avatar left and right with turns of the RL head (either standing on the spot or changing direction whilst walking), so that the environment can be fully explored without having to physically walk around the RL room
 

I made some new images of the project the other day.  I'll post them below.

1.jpg

2.jpg

Please pardon the weird kneecaps in the image above, we fixed that after this picture was made!  :)

 
