
Unity Tip: Minimizing Wrist Mis-Rotation Through TrackingAction Editing

MartyG

Hi everyone,

Since the launch of RealSense, I've been experimenting constantly to develop a way to prevent TrackingAction-powered objects from rotating if the Wrist type of tracking has not been selected.  Finally, I have an approach that resembles a solution.

Unity measures the time that it takes to process each frame through a property called 'Time.deltaTime'.  This value varies from frame to frame - sometimes it is slower, sometimes faster.  That variation can lead to objects in an application such as a game moving at an irregular "frame rate", making moving objects look jittery.  This is why a stable frame rate matters so much in games in particular.

If you have observed how TrackingAction-controlled objects react when they are turned by the wrist when you did not want them to be, you will see that they often rotate faster than they do under your intended control scheme.  This often coincides with Time.deltaTime having a noticeably higher value whilst that rapid wrist-generated movement is occurring - perhaps because the burst of physics activity causes Unity to take longer to complete the processing of each frame during the wrist turn.

This means that if the object's Time.deltaTime value exceeds a certain threshold, we can command Unity to freeze object rotation, helping to reduce occurrences of mis-rotation.

Below is my guide to adjusting the TrackingAction's C# script code to include such a monitoring mechanism. 

STEP ONE

Open up the TrackingAction script in your Unity editor and scroll down to line 200, where the void Update() function begins.

Directly beneath the start of the Update() function, insert the following C# script code, placing it in between the opening of Update() and the updateVirtualWorldBoxCenter() section.

	if (Time.deltaTime > 0.15)
	{
		// Frame time has spiked - freeze rotation by switching to the Stabilizer smoothing type.
		SmoothingType = SmoothingUtility.SmoothingTypes.Stabilizer;
	}

	if (Time.deltaTime < 0.15)
	{
		// Frame time is back below the threshold - resume normal tracking.
		SmoothingType = SmoothingUtility.SmoothingTypes.Weighted;
	}

What this script basically does is:

(a) Check the value of Time.deltaTime, which spikes when the object is being moved rapidly.

(b) If that value exceeds a certain threshold, automatically change the object's Smoothing type from whatever it is currently set to (I use Weighted in mine) to the Stabilizer type of smoothing.  This usually causes the object's motion to freeze.

(c) Because our code sits at the head of the continuously looping Update() function of the TrackingAction, it keeps checking the current Time.deltaTime value.  As soon as that value falls back below the threshold we specified, the TrackingAction's smoothing is switched back to the Weighted type and the object immediately begins responding to the user's inputs again.

If your project uses a smoothing type other than Weighted, you can customize it to your needs by changing the word Weighted to Spring or Quadratic.
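For example, if your project normally uses the Spring type, the restore branch of the check would look like the sketch below (0.15 is just the placeholder threshold used above).

	if (Time.deltaTime < 0.15)
	{
		// Restore the project's normal smoothing type (Spring in this example).
		SmoothingType = SmoothingUtility.SmoothingTypes.Spring;
	}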

STEP TWO

You will also need to find out how high the Time.deltaTime value of a TrackingAction-controlled object in your project typically goes, in order to work out the threshold value at which freezing of motion should kick in.  You can do this by adding the following line to the script above.

Debug.Log(Time.deltaTime);

This will cause the current Time.deltaTime value to be displayed in the Debug Console area of Unity (selectable with the 'Console' tab option on the Assets panel).  By moving your object with your hand and watching the changing numbers in the Console, and then turning your wrist to deliberately mis-rotate the object, you can see the typical value during normal movement and the range of values that it spikes up to during mis-rotation.

Through trial and error experimentation, you can find the Greater Than ( > ) value that will activate the switch of the smoothing type to the freezing Stabilizer.  Then, by also placing that same value in the Less Than ( < ) statement, you maintain normal tracking controls for as long as Time.deltaTime remains below that critical threshold.
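As a rough sketch of what the Step One code looks like once you have settled on your own threshold (the 0.3 figure below is purely a made-up example value), both statements simply share the same number:

	Debug.Log (Time.deltaTime);

	if (Time.deltaTime > 0.3)
	{
		// Example value only - replace 0.3 with the threshold found from your own Console readings.
		SmoothingType = SmoothingUtility.SmoothingTypes.Stabilizer;
	}

	if (Time.deltaTime < 0.3)
	{
		SmoothingType = SmoothingUtility.SmoothingTypes.Weighted;
	}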

CONCLUSION

Unless you are very lucky, the mechanism in this guide will not completely eliminate wrist mis-rotation.  But it does at least give you a chance to reduce its effects and help to stop an object over-extending its safe rotation limits to the point where the object's rotation goes crazy.

I'll post any refinements I achieve in the technique to the comments of this guide.  Good luck!

MartyG

I made a couple of tweaks to my approach that made detection of threshold-crosses more stable.

*  I added a multiplication (* 30) to Time.deltaTime.  On its own, Time.deltaTime measures processing time per frame, which can be an unreliable measurement because frame speed varies with the CPU and graphics capabilities of an individual computer.  Multiplying it converts the measurement to a per-second basis instead of per-frame, making the reading much more consistent across computers of different capabilities.

It should be noted that whilst multiplying Time.deltaTime often amplifies the movement speed of an object, it does not do so in this case because we are not actually modifying the Time.deltaTime that the TrackingAction uses to calculate rotation.  All we are doing is reading the Time.deltaTime value that the TrackingAction generates in the object and looking at what it would be if it were expressed as a per-second value instead of a per-frame one.

*  I moved the checking code outside of the TrackingAction's main Update() function and into a LateUpdate() function above it, so that its processing is independent from whatever is happening in Update().

*  I adjusted the If statement's logic.  Instead of switching the freezing Stabilizer back to the normal smoothing type as soon as the Time.deltaTime value dropped below the critical threshold, I set it to switch on the Stabilizer for a period of 1 second (effectively freezing the object's motion for 1 second) and then switch back to the normal smoothing, at which point the Time.deltaTime value is checked again to see whether it is high enough for the freeze to be instantly re-instated for a further second.

The reason for doing this was that the Time.deltaTime values change so quickly that the Stabilizer freeze only lasted for a split second before the value fell back below the threshold and unfroze the object.  Freezing motion for 1 second after switching to Stabilizer also gives feedback to the user that motion has been blocked.

Here's the updated C# code.

	void LateUpdate()
	{
		Debug.Log (Time.deltaTime * 30);

		if (Time.deltaTime * 30 > 3) {
			// Threshold crossed - freeze rotation and start the 1-second pause.
			SmoothingType = SmoothingUtility.SmoothingTypes.Stabilizer;

			StartCoroutine (PauseTracking ());
		}
	}

	IEnumerator PauseTracking() {

		yield return new WaitForSeconds (1);

		// Unfreeze by restoring the normal smoothing type.
		SmoothingType = SmoothingUtility.SmoothingTypes.Weighted;
	}

A disadvantage of this approach though is that you may need to create a uniquely named copy of the TrackingAction script (e.g. TrackingAction2, TrackingAction3, etc.) for each hand-driven TrackingAction object in your project, since those objects will likely be moving at different speeds and so will need their own uniquely configured Time.deltaTime threshold value.

In my own project's full-body avatar, for example, the lower arm moves a lot faster than the shoulder joint and so was generating a higher Time.deltaTime than the shoulder, meaning that if the exact same script was used in both, the freeze was constantly being triggered by normal movement instead of only by wrist turning.  The lower arm joint therefore needed its own unique TrackingAction2 with a higher trigger threshold defined, so that the rapid movement of the lower arm did not trigger the freeze so easily.
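A possible refinement (just a sketch, rather than something taken from my finished project) would be to expose the threshold as a public variable at the top of the script, so that each object's TrackingAction could be given its own value in the Inspector instead of needing a separate copy of the script just to change one number:

	// Sketch: a per-object threshold that can be set in the Inspector.
	public float freezeThreshold = 3f;

	void LateUpdate()
	{
		if (Time.deltaTime * 30 > freezeThreshold)
		{
			SmoothingType = SmoothingUtility.SmoothingTypes.Stabilizer;

			StartCoroutine (PauseTracking ());
		}
	}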

MartyG

I mentioned at the end of the comment above that a potential drawback with the method of wrist-turn detection described in this article was that every joint might need to have a uniquely numbered TrackingAction script with a rotation detection range configured to how that specific joint behaves.  However, I came up with a solution that neatly avoids this situation.

I asked myself why I would need to have wrist-turn detection code in every single hand-driven joint.   Since the purpose of the code was to tell the joint whether a wrist turn had occurred, it was only actually necessary to detect the wrist-turn once and then send out a 'freeze' command to all of the other hand-driven joints.

Here again is the script that I was using up until now.

	void LateUpdate()
	{
		Debug.Log (Time.deltaTime * 30);

		if (Time.deltaTime * 30 > 3) {
			SmoothingType = SmoothingUtility.SmoothingTypes.Stabilizer;

			StartCoroutine (PauseTracking ());
		}
	}

	IEnumerator PauseTracking() {

		yield return new WaitForSeconds (1);

		SmoothingType = SmoothingUtility.SmoothingTypes.Weighted;
	}

Basically, when the Time.deltaTime value of the object containing the TrackingAction crossed the threshold defined in the If statement, the smoothing type of the TrackingAction would be changed from Weighted to the Stabilizer type, causing object rotation to freeze.  The WaitForSeconds command would then make the TrackingAction wait 1 second before switching the smoothing back to its usual type so that rotation could resume.

In the new system that I scripted, however, the freezing and unfreezing actions are removed from the If statements.  Instead, when a wrist turn is detected the TrackingAction is told to jump to a separate function I placed in the script called RestartMotion(), which contains the freezing and unfreezing actions.  It also tells the TrackingAction to send an activation signal to a list of the objects with hand-driven TrackingActions that also contain the RestartMotion function (but not the angle-checking code).

Here is the full listing of the new script.

public void OnGUI()
	{

		rotationangley1_rightshoulder = transform.eulerAngles.y;
		rotationangley2_rightshoulder = transform.eulerAngles.y;

		// OR statement - || symbol
		// If you use AND - && - then both conditions must be satisfied
		// and not just one of them.
		// For a 'between range A and range B' instruction though, use &&


		if (rotationangley1_rightshoulder > 150 && rotationangley1_rightshoulder < 340) {
			RestartMotion ();
			GameObject.Find("Lower Right Arm Movement Pivot 2").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Lannel Right Hand - Bend Up-Down").GetComponent<TrackingAction7>().RestartMotion();

		}

		if (rotationangley2_rightshoulder > 1 && rotationangley2_rightshoulder < 50) {
			RestartMotion ();
			GameObject.Find("Lower Right Arm Movement Pivot 2").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Lannel Right Hand - Bend Up-Down").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Right Middle Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Middle 1 Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Middle Finger Fingertip Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Right Index Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Index 1 Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Index Finger Fingertip Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Right Ring Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Ring 1 Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Ring Finger Fingertip Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Right Thumb Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Thumb Middle Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

		} 

}


public void RestartMotion()
{
	SmoothingType = SmoothingUtility.SmoothingTypes.Stabilizer;

	StartCoroutine (PauseTracking ());
}

IEnumerator PauseTracking() {

	yield return new WaitForSeconds (1);

	SmoothingType = SmoothingUtility.SmoothingTypes.Weighted;
}

It is worth noting that in the very top line of the script, I am now using an OnGUI() function instead of the LateUpdate() used in the previous version.  This is because OnGUI() is called multiple times per frame (once for each GUI event), so the angle check runs more often than it would inside LateUpdate(), which runs once per frame.

This makes the check for wrist rotation more effective, because wrist rotation turns an object quite quickly: the sooner the script detects that the object's angle has crossed into the 'danger zone' defined in the If statement, the sooner the hand-driven objects can be frozen, before they have a chance to travel far.

Another thing you may notice about the above script is that its activation list only makes reference to "right" parts and not left ones.  This is because the left and right shoulders of my full-body avatar each have their own separate angle-checking routine in unique TrackingActions (TrackingAction6 in the right shoulder and TrackingAction8 in the left shoulder).  The one in the left shoulder also has its own unique freeze activation list that only references the joints in the left arm, not the right arm.

What this means is that when wrist rotation is detected in the right arm, only the joints in the right arm will be frozen whilst the left arm is free to continue moving, and vice versa.  You may not need this level of complexity in your own project.  In the avatar in our project though, it was necessary because we had to ensure that only the arm corresponding to the one whose wrist had been turned by the user in the real world was affected.  If the other arm was also frozen even though the user had not turned its wrist, it would make for a frustrating play experience.

I'll keep updating the article as I further refine the wrist detection techniques.

MartyG

I found a neat solution that enables wrist turning to be detected with only a small wrist turn to the left or right, stopping the object before it can mis-travel far.  

STEP ONE

Create an object that has all Position constraints locked and all of the Rotation axes unlocked.  Here is an image of my avatar's shoulder TrackingAction.

1.jpg

STEP TWO

Run your project and, whilst holding your hand in front of the camera and turning your wrist fully left and fully right, observe how the rotation values in the Inspector panel of Unity are changing in real-time.  

Work out which one of the three rotational axes (X, Y or Z) changes the least in the plus and minus degree directions (what we would call a small travel range would be something like 350 and 10 degrees, i.e. 10 degrees either side of the object's 0 degree start position).  The axis that changes the least should be the one that you set the angle-detection script from earlier in this article to check.

The reason for this is that you want to be able to move your object around freely with the normal up-down-left-right hand controls without accidentally freezing the object's motion.  You only want the freeze to take place when you turn the wrists.  If you use the axes that change by the greatest amount to detect the angles, this means that you have to set a wide range of degrees that the hand can move freely in before the freeze triggers, otherwise you will be setting off the freeze accidentally almost constantly.  

A consequence of this, though, is that the wrist has to turn at least half to three-quarters of the way around before the freeze initiates, meaning that your object has already mis-traveled quite a distance.  That makes the counter-wrist mechanism almost pointless for the purposes of stopping mis-rotation.

However, if you identify the axis that changes the least during left-right wrist rotation, it will be particularly suited to being your angle-detection axis.  This is because wrist-turn driven objects usually travel further and faster than when the normal hand controls are being used.  Therefore, you can set a short range for the detection trigger (e.g. -10 or +10 degrees), and the object will not travel far enough during normal (slower) up-down-left-right hand control to reach those trigger angles and set off the freeze at the wrong time.
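If you find the Inspector numbers hard to follow in real-time, an optional alternative sketch is to log all three angles to the Console from a small script on the object while you turn your wrist:

	void Update ()
	{
		// Prints the object's (x, y, z) rotation in degrees every frame,
		// making it easy to see which axis changes the least during a wrist turn.
		Debug.Log (transform.eulerAngles);
	}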

In my own project, I found that of my three open rotation axes, Z traveled the least distance when the wrist was being rotated.  So I changed my script to set Z as the axis to run the degree value checks on.

public void OnGUI()
	{

		rotationanglez1_rightshoulder = transform.eulerAngles.z;

		if (rotationanglez1_rightshoulder > 35 && rotationanglez1_rightshoulder < 340) {
			RestartMotion ();
			GameObject.Find("Lower Right Arm Movement Pivot 2").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Lannel Right Hand - Bend Up-Down").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Right Middle Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Middle 1 Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Middle Finger Fingertip Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Right Index Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Index 1 Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Index Finger Fingertip Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Right Ring Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Ring 1 Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Ring Finger Fingertip Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

			GameObject.Find("Right Thumb Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
			GameObject.Find("Right Thumb Middle Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

		}

	}


public void RestartMotion()
{
	SmoothingType = SmoothingUtility.SmoothingTypes.Stabilizer;

	StartCoroutine (PauseTracking ());
}

IEnumerator PauseTracking() {

	yield return new WaitForSeconds (1);

	SmoothingType = SmoothingUtility.SmoothingTypes.Weighted;
}

Now when I ran my avatar project, it only took a small left or right wrist turn to activate the freeze, yet the freeze never occurred during use of normal hand controls.  Because I had set up a freeze activation list in the arms but not included the TrackingAction in the wrist in this list, I was now able to use left-right wrist turns to rotate the avatar's hands without the rest of the arm rotating with it.  I had finally succeeded in conquering wrist rotation!

MartyG

After successfully negating object mis-rotation caused by the user turning their wrist, I noticed a different problem.  Although the fingers could no longer flex open and closed when the wrist turned, they were still flexing when the user's hand was moved up and down during arm movement.

This is a problem that has long been present in the facial-driven parts of my avatar too - the TrackingAction interprets the raising and lowering of the head as movement of a landmark, when in fact you only want those objects to move when the actual landmark assigned to them (eyebrow, eyelid, etc) is moved.

Fortunately, my many experiments with RealSense over the past year and a half gave me the knowledge to work out a solution relatively quickly.  What I did was to build a hand-driven 'rocker switch in a box' mechanism that would freeze finger joint movement when the rocker was moved up and down by the hand and touched the edges of the box.  When the rocker detached from the box surface, normal control would switch back on, enabling the fingers to be flexed.

To enhance the accuracy of this rocker, an auto-reset script was added to the rocker that would automatically return it to the flat horizontal 0 degree angle when the user stopped applying control effort.  This system would prevent the fingers from mis-flexing when the hand was moving but allow the fingers to be opened and closed when the hand was held relatively still, as the rocker auto-returned to its center point and re-activated the finger rotation.

Below, I will show step by step how this system was constructed.

STEP ONE

Construct a long, thin box from four thin Cube type objects. This box will house the rocker switch.

1.jpg

STEP TWO

Create a long thin Cube to act as the rocker switch and slide it inside the box.  It does not really matter how long it is so long as it is longer than the width of a standard default Cube so that the ends of the rocker can touch the top and base of the box almost as soon as it starts rotating up and down. 

2.jpg

STEP THREE

Place a TrackingAction inside the rocker and set its constraints so that it is only free to move in the X Rotation axis.

3.jpg

STEP FOUR

Create a C# script file inside the rocker that will reset the rocker's rotation angle to 0 when the user stops actively applying movement control (e.g when they hold their hand stationary in front of the camera, with tracking still active but the object not being moved).  Give it the following C# code.

	void Update () {

		// Take the rocker's current rotation and zero the X axis angle.
		var newRotation = transform.eulerAngles;

		newRotation.x = 0;

		// Ease the rocker back towards that reset angle each frame.
		transform.rotation = Quaternion.Slerp(transform.rotation, Quaternion.Euler(newRotation), Time.deltaTime);
	}

If you want to modify the script for your own project, you can change the axis that is auto-reset by changing newRotation.x to newRotation.y or newRotation.z, and change the angle that it resets to by changing '= 0' to a different angle.
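For example, with purely made-up values, auto-resetting the Z axis to 45 degrees instead would look like this inside Update():

		// Example values only: reset the Z rotation towards 45 degrees instead of X towards 0.
		var newRotation = transform.eulerAngles;

		newRotation.z = 45;

		transform.rotation = Quaternion.Slerp(transform.rotation, Quaternion.Euler(newRotation), Time.deltaTime);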

STEP FIVE

Open, in the Unity script editor, the TrackingAction that contains the RestartMotion() function that we defined earlier in this article.  Below that block of code, create another new function called something like RestartMotion_Cancel().  When the rocker switch moves away from the surface of the box, we will be telling the box to send an activation signal to the finger joints to unfreeze them by changing their smoothing type back from 'Stabilizer' to 'Weighted'.

Here is the code I used for my RestartMotion_Cancel function.

	public void RestartMotion_Cancel()
	{
		SmoothingType = SmoothingUtility.SmoothingTypes.Weighted;
	}

STEP SIX

Highlight the top and bottom box edges and, on each of them, place a tick in the 'Is Trigger' option of the object's Collider component in the Inspector panel of Unity.  This tells Unity that when the collider of another object (in this case, our rocker switch) intersects the collider of the box edge, an event should be triggered.

4.jpg

STEP SEVEN

Highlight the rocker switch and add a Rigidbody physics component to it by going to the 'Component' menu of Unity and selecting the sub-options 'Physics > Rigidbody'.

Give the Rigidbody the settings shown below - non-Gravity, non-Kinematic, and all constraints enabled.  

5.jpg

It does not matter in this case that we are locking all constraints, since the TrackingAction ignores the constraints of a Rigidbody anyway unless the 'Effect Physics' box at the very base of the TrackingAction is ticked.  We just want to make sure that our rocker switch cannot be moved from its pivot position at the center of the box if a non-TrackingAction object somehow collides with it.

The reason we need a Rigidbody in the rocker is that the trigger that we have defined in the collider field of the box sides will not be set off when the rocker collides with it unless the Rigidbody is present in the rocker.

STEP EIGHT

Create another new C# script file in the top board of your box and place a copy of the same script file in the bottom board too.  I gave the script the following C# code.

void OnTriggerEnter(Collider other)
	{

		GameObject.Find("Right Middle Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
		GameObject.Find("Right Middle 1 Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
		GameObject.Find("Right Middle Finger Fingertip Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

		GameObject.Find("Right Index Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
		GameObject.Find("Right Index 1 Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
		GameObject.Find("Right Index Finger Fingertip Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

		GameObject.Find("Right Ring Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
		GameObject.Find("Right Ring 1 Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
		GameObject.Find("Right Ring Finger Fingertip Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

		GameObject.Find("Right Thumb Base Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();
		GameObject.Find("Right Thumb Middle Rotational Joint").GetComponent<TrackingAction7>().RestartMotion();

	}

	void OnTriggerExit (Collider other)
	{

		GameObject.Find ("Right Middle Base Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();
		GameObject.Find ("Right Middle 1 Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();
		GameObject.Find ("Right Middle Finger Fingertip Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();

		GameObject.Find ("Right Index Base Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();
		GameObject.Find ("Right Index 1 Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();
		GameObject.Find ("Right Index Finger Fingertip Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();

		GameObject.Find ("Right Ring Base Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();
		GameObject.Find ("Right Ring 1 Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();
		GameObject.Find ("Right Ring Finger Fingertip Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();

		GameObject.Find ("Right Thumb Base Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();
		GameObject.Find ("Right Thumb Middle Rotational Joint").GetComponent<TrackingAction7> ().RestartMotion_Cancel ();

	}

What this script does is that when the rocker's end touches the upper or lower box edge, the OnTriggerEnter() function is triggered.  The code within this block sends an activation instruction to the RestartMotion() movement-freezing routine in each of the objects on the list that contain our modified TrackingAction.

Obviously the objects in the list refer to the ones in my own project, and you should edit the list of GameObject.Find statements to point to the object and script / function names in your own project.

Beneath the OnTriggerEnter() function in the above script is an 'OnTriggerExit' function.  This tells Unity to activate the code contained in the function when the rocker object's rotation moves it outside of the boundaries of the trigger-enabled collider (i.e. it "exits").  The list of objects in this code block is the same as in the OnTriggerEnter function above it.  The key difference is that instead of sending an activation signal to the rotation-freezing function (RestartMotion), it sends the activation to the unfreezing command (RestartMotion_Cancel).

STEP NINE

Select one of the TrackingAction-enabled rotation joints so that you can view its TrackingAction settings - in particular, the Smoothing type, currently set to 'Weighted' - in the Inspector panel.  Then run the project and move your hand up and down.  When the hand moves slightly in the up or down direction, the setting should change almost immediately to 'Stabilizer', signifying that the object's rotation has been frozen and can no longer mis-rotate as you move the hand.

When you stop moving your hand, you should observe that the setting automatically returns to 'Weighted', signifying that the joints are unlocked and can be moved again.

6.jpg

STEP TEN - OPTIONAL

Now that I had finger-joint freezing working for the joints of the right hand, I created an identical rocker box dedicated to the joints of the left hand, in the same way that I made freezing of the right and left arms independent earlier in this article.  If your project is not a two-hand control one then you probably do not need to take this step.

CONCLUSION

Although I have not attempted it yet, in theory the rocker-switch system could also be applied to facial-controlled objects, with the rocker's control tied to the Nose Tip landmark, so that those parts no longer mis-rotate up and down when the head moves up and down, and can only move when the head is held stationary.

Edit: I tried a nosetip-driven rocker switch with the avatar's face-driven parts.  Movement of the rocker with the nose was unreliable and the facial parts dropped off the head of the avatar when their Smoothing type was switched to Stabilizer.  I altered the system to try to slow down face-part misrotation by increasing weighting to 20 instead of switching to Stabilizer, but this did not solve the problem of the unreliability of controlling the rocker with the nose.  So I abandoned trying to freeze face parts with this system for the time being.

Edit 2: I had better luck on a subsequent attempt to build a face-driven rocker switch.  I unticked the 'Bounding Box' setting on the tracking options, set the landmark to 'Nose Top' instead of 'Nose Tip', and adjusted the design of the rocker's box to just have a pair of thin planks at the front of the rocker.

1.jpg

The reason for the change in design is that when face tracking is acquired, the object movement is much more sensitive than it is with hand tracking.  This meant that the rocker had much more trouble maintaining a flat horizontal orientation when the user's head was stationary and level.  Also, it tended to tip up and down when moving the face towards and away from the camera.

A consequence of the sensitivity of the controls was that it was too easy for the back end of the rocker to touch the ceiling of the box and trigger the freeze condition in the project's face-controlled objects at the wrong time.  By having the trigger blocks in the form of thin planks at the front end of the rocker, this meant that the freeze could not be accidentally triggered by a strong tip, because if it did so then it was just swinging into empty air where there was no trigger present.

The design only solved half of the problem with a face-controlled rocker though.  There was still the matter of the face parts falling off the head of my project's avatar when switching from the 'Weighted' smoothing option to the freezing 'Stabilizer' one.  Curiously though, the rocker did not fall away from its box when set to Stabilizer, even though it was also face controlled, so this opened the possibility that the fall-away was due to the hierarchy of the objects in the avatar's construction rather than a default dislike of the Stabilizer setting by face-driven objects.

Edit 3: I found the solution to the problem with switching from 'Weighted' to 'Stabilizer'.  It was in the smoothing value.  Stabilizer didn't like it if the smoothing value was greater than 1 and was forcing the position of the objects' axes to reset to zero, burying them inside the avatar's head.  Using a Stabilizer value of 1 prevented the objects from disappearing.  They were now moving a little faster than I would have liked (the typical weighting value of face parts in my avatar was 4 or 5) but it was a relief to at least have the freezing mechanism working great for the face now.

Edit 4: I worked out a simple solution to the problem of Stabilizer only being able to use smoothing values of less than 2, which made facial part movement too fast.  Above the statements that change the Smoothing type, I added a line that sets the smoothing value suited to that particular Smoothing type before making the change.  So with the freezing instruction, I set the smoothing value to '1' before switching to Stabilizer.  And when changing back to the Weighted type, I changed the smoothing value from '1' back to '4' before making the mode change.

	public void RestartMotion()
	{
		SmoothingFactor = 1;
		SmoothingType = SmoothingUtility.SmoothingTypes.Stabilizer;

		StartCoroutine (PauseTracking ());
	}

	IEnumerator PauseTracking() {

		yield return new WaitForSeconds (1);

		SmoothingFactor = 4;
		SmoothingType = SmoothingUtility.SmoothingTypes.Weighted;
	}

	public void RestartMotion_Cancel()
	{
		SmoothingFactor = 4;
		SmoothingType = SmoothingUtility.SmoothingTypes.Weighted;
	}

The movement speed of the facial parts worked perfectly again once these changes were added to the TrackingActions.

MartyG

Once I had installed effective controls in the facial parts of my avatar for freezing mis-rotation when the head was raised or lowered, I could consider re-introducing other features that had been experimented with in the past but dropped because the aforementioned mis-rotation kept mis-triggering them.

I began by installing a new and improved version of a system that changed the eye pupil texture when certain conditions were met - for example, changing it from a normal pupil to a small one (representing an angry / shocked emotion) and then changing it back to the normal one again when the player was no longer expressing that emotion on their real-life face.

The latest version of the pupil system would use a more complex form of If statement that checks the angles of objects outside of the one where the angle-checking script is located.  This helps to ensure that the pupil change only happens occasionally, keeping it interesting, special and meaningful to the player (over-use of the change would make it boring).

The plan was that the "angry / shocked" pupil state would only happen when (a) the angle of the eyebrow's bending joint was in a range where it was making the "angry" eyebrow expression; and (b) the lower lip of the avatar was at an angle where it was making an "angry" expression with the lip.  The change would not trigger if the eyebrow alone was angry, or the mouth alone was angry.

I will provide a quick guide to how I set up the avatar eye pupils for texture toggling via scripting before I explain the setup of multi-object checking If statements.

STEP ONE

The avatar eye consists of a sphere with a TrackingAction inside that rotates the eye in the X, Y and Z directions in response to movements of the nose-tip landmark.  The texture of that sphere is an all-white texture to represent the white of the eye. 

Childed to that rotator is a  copy of that same sphere, the exact same size and overlaid on the exact same coordinates but lacking a TrackingAction to move it.  It is on the childed sphere that the pupil texture is overlaid.  The pupil is a light-reactive (Specular) transparent PNG, with the white area cut out so that the white texture of the parent sphere can be seen through it.

Having a separate matte white eye and shiny specular pupil means that, like a real eye, the pupil can appear to shine when light falls on it whilst the white remains relatively non-shiny.  This helps to avoid a common flaw of CG computer animation cartoons where the entire eye is shiny and looks obviously wrong to the viewer - once you notice it for the first time, your mind can never stop focusing on it instead of on the story.

1.jpg

STEP TWO

Once the eye is set up, the childed pupil object can be selected and have an "image array" C# script created in it that will change the pupil texture that is displayed when an activation call is sent to it from another script.  Here is the C# image array script used in my pupil.

	public Material[] materials;

	public void EyeExpressionNormal () {

		GetComponent<Renderer> ().sharedMaterial = materials [0];
	}

	public void EyeExpressionSmall () {

		GetComponent<Renderer> ().sharedMaterial = materials [1];
	}

The script basically tells the 'Renderer' component of the object - that controls whether the object is visible or not - to change the currently displayed image - or 'material' - of the eye pupil to the one stored in a specifically numbered slot of the image array.  The array is like a database of textures where you can store a number of different images inside a single script. 

An instruction that you may not have encountered before, even if you are used to writing C# scripts, is the "public Material[] materials" line at the top of the script.  Its purpose is to declare a public array variable that stores Materials (textures) rather than numbers or text.  Without this line being present, the array will not function.

You will also notice that in my script, I have defined two separate functions (EyeExpressionNormal and EyeExpressionSmall), each with a Renderer instruction that refers to the storage slot in the array that a particular texture is assigned to.  This enables us to change the texture that is currently displayed just by putting a call to a specific function in the script that will send the toggle activation.

For example, to call the EyeExpressionNormal function that contains the normal pupil texture, we would simply put in our activation script something resembling the following:

GameObject.Find ("Left Eye Rotator - Shiny Pupil").GetComponent<EyePupilExpression>().EyeExpressionNormal();

The above line tells Unity to (a) look for the texture array in a script called 'EyePupilExpression' inside the 'Left Eye Rotator - Shiny Pupil' object, and (b) jump down to the sub-function of the script called 'EyeExpressionNormal' instead of running the script from its start position at the top of the code listing.

STEP THREE

After saving our image array script, checking the Inspector panel settings for the eye pupil object revealed that an option setting called 'Materials', set to 0 by default, was now available.

3.jpg

By putting the value '2' into the 'Size' text box, two slots - or 'Elements' - for the storage of images were created in the image array.  By left-clicking on the small circle icon at the end of each Element row to bring up a Material Selector pop-up window, the normal-sized pupil texture (called 'pupil-alone-shiny') was assigned to Element 0, and the small pupil texture that we wanted to be able to toggle to ('pupil-small-1') was applied to Element 1.

4.jpg

Important note: a Material version of a texture is created in Unity the first time that you apply it to the surface of an object.  If you have not done this before trying to assign a texture as a Material to the image array, there will be no Material of that texture to select.  

An easy way to ensure that a new texture has a Material is simply to apply it to any object as soon as you have imported it into Unity and then use the Ctrl-Z keyboard shortcut to undo the application and restore that object to its previous state.  That way, you have done no harm to your project and the Material is now created, ready and waiting for the time when you actually need to use it.

Once your image Materials have been defined in the array then you can send activation instructions from another script to the sub-function of your array script that contains the definition for that array slot.  For example, if you wanted to activate the function containing the details of the normal eye pupil texture:

GameObject.Find ("Left Eye Rotator - Shiny Pupil").GetComponent<EyePupilExpression>().EyeExpressionNormal();

STEP FOUR

Once the object whose texture is to be toggled has been set up (the eyes, in this case), we can set up the C# script containing the 'If' logic instructions that will check whether certain conditions have been met before it is allowed to send an activation instruction to a particular Element slot in the image array.

This checker script is created inside the object whose state is the main condition for deciding whether activation should progress or not.  In the case of my project, I wanted it to check whether the main rotation joint in the eyebrow was rotated at an angle corresponding visually to the 'angry' downward shape of the eyebrow.  If the current angle of the joint was within this range then the activation signal would be sent to the 'small' pupil texture in the image array.  If the angle moved outside of this range then the pupil would be changed back to the 'normal' texture, if it was not already in that state.

5.jpg

Here is the script of If statements and image-array activation statements that I used in my project.

private float eyebrow_expression_normal_to_small_x1;
private float lip_expression_angle;

void Update () {

	eyebrow_expression_normal_to_small_x1 = transform.eulerAngles.x;
	lip_expression_angle = GameObject.Find("Left-Side Mouth Rotator").transform.eulerAngles.x;

	if (eyebrow_expression_normal_to_small_x1 > 305 && eyebrow_expression_normal_to_small_x1 < 315 && lip_expression_angle > 0 && lip_expression_angle < 10) {

		GameObject.Find ("Left Eye Rotator - Shiny Pupil").GetComponent<EyePupilExpression>().EyeExpressionNormal();
		GameObject.Find ("Right Eye Rotator - Shiny Pupil").GetComponent<EyePupilExpression>().EyeExpressionNormal();
	}

	if (eyebrow_expression_normal_to_small_x1 > 315 && eyebrow_expression_normal_to_small_x1 < 359 && lip_expression_angle > 10 && lip_expression_angle < 20) {

		GameObject.Find ("Left Eye Rotator - Shiny Pupil").GetComponent<EyePupilExpression>().EyeExpressionSmall();
		GameObject.Find ("Right Eye Rotator - Shiny Pupil").GetComponent<EyePupilExpression>().EyeExpressionSmall();
	}
}

If you have followed this article from the very start, at the top of this page, then much of the script will already be familiar to you and does not need explaining again.  The main difference in this script compared to the previous ones is the increased complexity of the 'If' statement, and it is that aspect that we will explain here.

In the earlier scripts, we only checked the angle of the TrackingAction-powered object that our checker script was housed within.  To enable the script to also check the angle of an object outside of where the script is located however, we need to do two things:

(a) Create a variable for the result of the check of that other object to be stored in.  In my project, this was the main TrackingAction-powered rotation object in the bottom lip of the avatar.  I therefore defined a 'private float' variable called "lip_expression_angle" at the top of the script to store the rotation angle of that lip component in.

(b) Tell Unity where to look for that rotating object in the project so that its angle can be checked.  The script was told that once it found the object using the location description that we gave it, its angle should be stored inside the 'lip_expression_angle' variable.  When all of these elements were brought together, the line for carrying out the instruction looked like this:

lip_expression_angle = GameObject.Find("Left-Side Mouth Rotator").transform.eulerAngles.x;

Breaking down the structure of the right-hand side of the equation, we are saying that the script should look at the lip rotator object called 'Left-Side Mouth Rotator' and examine only the X axis of its rotation (transform.eulerAngles.x).

Going down further in the script listing to look at the structure of the 'If' statements: although it looks intimidating, it is actually very similar to what we have explored earlier in this article.  Before, we were using If statements that contained two checking instructions.  For example:

if (eyebrow_expression_normal_to_small_x1 > 305 && eyebrow_expression_normal_to_small_x1 < 315 ) {

When adding the angle checks for other objects, we are essentially just pasting a similar instruction onto the end of the previous equation, to turn it from a 2-part one to a 4-part one.  

First, you place an '&&' instruction - which means And - at the end of the first pair to tell the script that it is not the end of the If statement and there are further instructions to check: 

if (eyebrow_expression_normal_to_small_x1 > 305 && eyebrow_expression_normal_to_small_x1 < 315 &&  )

And then you insert the information for the second pair of conditions to check directly after the && instruction:

if (eyebrow_expression_normal_to_small_x1 > 305 && eyebrow_expression_normal_to_small_x1 < 315 && lip_expression_angle > 0 && lip_expression_angle < 10) {

The mistake that one is most likely to make is in the placement of the normal brackets - ( and ).  The opening bracket goes directly after the word 'If' and the closing bracket goes directly before the squiggly opening bracket '{' at the very end of the If statement.  Basically, all of the conditions that you want to be checked must be enclosed within the opening and closing normal brackets ( and ), with nothing outside of it except If at the start and { at the end.

The If statements of the script check these conditions:

(a) If the angle of the eyebrow joint is between 315 and 359 degrees, AND the angle of the lip joint is between 10 and 20 degrees, THEN send the activation command to the image array to change the eye pupil texture to the Small one.

and (b) If the angle of the eyebrow joint is between 305 and 315 degrees, AND the angle of the lip joint is between 0 and 10 degrees, THEN send the activation command to the image array to change the eye pupil texture to the Normal one.

By making the checker script examine more than one object's condition before the texture of the target object is changed, we can create complex animation sequences controlled by the user in real-time that closely resemble non-interactive cartoons and animated movies.  You can also store in a variable the current emotional state of an avatar's facial part so that other objects - such as non-player characters in a game - can "read" the user's facial emotions via the variable and respond to them.  
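As a very small sketch with made-up names, storing that emotional state so other scripts can read it could be as simple as a shared variable in its own script file:

	using UnityEngine;

	// Sketch with made-up names: a shared variable holding the avatar's current facial emotion.
	public class FaceEmotionState : MonoBehaviour {

		public static string CurrentEmotion = "Normal";
	}

	// Set it inside the If statement that switches to the small pupil:
	//     FaceEmotionState.CurrentEmotion = "AngryShocked";
	//
	// And read it from any other script, such as a non-player character:
	//     if (FaceEmotionState.CurrentEmotion == "AngryShocked") { /* react to the player */ }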

CONCLUSION

Here are some images of the default eye state of the avatar's pupils, and what the pupils change to when the detection conditions are satisfied.

6.jpg

MartyG

My research into "If" logic statements has produced a new variation of my rotation-limiter routine in Unity that provides smoother, more natural resistance to moving past a certain angle of rotation.

Up until this point, I have been freezing rotation by changing the smoothing type of an object from 'Weighted' to the stationary 'Stabilizer' when the object rotated into a certain range of degrees (e.g between 300 and 340 degrees).  Whilst this worked, it could result in a small jump in the object's motion when it switched back to the 'Weighted' type and unfroze.  I thought that there had to be a better way to limit rotation.

I knew that one of the factors in how the 'TrackingAction' SDK tracking script calculates the rotation of a camera-controlled object is to look at the quaternion angle of an object and then invert it so that a positive-value angle becomes a negatively valued one.

1.jpg

It therefore seemed reasonable to assume that if eulerAngles was made equal to its original value (instead of minus eulerAngles) when the object's rotation entered a certain range, then the object's direction of rotation would be reversed when the user made control input to the camera.  This would move the object backwards, away from the limit zone.

As soon as the object had left the defined zone, the angle checker - which is continuously looking at the object's degree angle - would revert the controls back to their normal direction, since the condition of the If statement was no longer true once the object had moved out of the limiter zone.

I added the following C# code directly beneath the above eulerAngles statements.  It is an 'If' logic statement that continuously checks the value of the Z axis of an object and restores the eulerAngles value to its original value, instead of leaving it inverted, if the angle falls between greater than 200 degrees and less than 300 degrees (i.e. between 200 and 300 degrees).

eulerAngles.x = -eulerAngles.x;
eulerAngles.y = -eulerAngles.y;
eulerAngles.z = -eulerAngles.z;


if (transform.eulerAngles.z < 300 && transform.eulerAngles.z > 200) {

	// Inside the limit zone: flip Z back to its original (non-inverted) value,
	// which reverses the direction of camera-driven rotation on that axis.
	eulerAngles.z = -eulerAngles.z;
}

If you wanted to affect the X axis or the Y axis of an object in your project, you would change the .z on the end of the eulerAngles statement to eulerAngles.x or eulerAngles.y.  

Once I had confirmed that the code worked, subsequent tests revealed that it works best with objects that use a single rotation axis rather than ones that can travel in two or three directions.  This is because travel in one direction can alter the angle of another axis and so risk triggering the limiter at the wrong time.  I discovered this when I tried the code in the shoulder of my full-body avatar: I was attempting to limit how far the shoulder could swing backward in the Z direction, but moving the arm sideways in the X direction would also alter the value of Z and cause arm movement to stutter.

The single-axis object that I chose to use the code in was the waist-bending joint of the avatar that allows it to lean forwards to touch the floor with the hands and then rise up to vertical standing position again.

With the 'If' statement added to the TrackingAction script inside the waist-bending joint, the avatar was able to bend down to the maximum angle of travel that we had defined (300 degrees) and then gently rise a little way back out of the danger zone in the reverse direction, as the 'If' condition flipped the sign of eulerAngles and the camera interpreted further downward movement of the real-world head as an order to rise up away from the floor.

2.jpg

Such a rise is consistent with the natural movement of the human body, which will tend to back off slightly from an extreme pose if a person tries to hold themselves still in that pose, because strain begins to set in to the limbs.
