
Demo of avatar mouth and lip movement with the RealSense camera

MartyG
Honored Contributor III

Hi everyone,

Below is a link to a YouTube clip showing the latest build of our RealSense-powered Unity avatar's mouth and lip movement.

https://www.youtube.com/watch?v=9GlDWj44-B0&feature=youtu.be

For our up-and-down mouth motion, we use a TrackingAction script constrained so that only the vertical Y position axis is unlocked.  It activates when movement of the left side of the mouth is detected.  The TrackingAction that lifts the mouth up and down is attached to a small sphere object placed just above the mouth, and all of the pieces of the mouth are child-linked to this sphere so that they all move up and down together with it.

http://sambiglyon.org/sites/default/files/mouthmotion.jpg

This tracking point was chosen because it is the part of the mouth that moves the most when a person's real mouth smiles or grimaces.
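
For anyone who prefers to see that single-axis idea expressed in code, here is a minimal plain-Unity C# sketch of what the setup does.  It is only an illustration: the class name VerticalMouthDriver and the trackedCornerY input are made up for this example, since the actual TrackingAction is a toolkit component that you configure in the Inspector rather than write yourself.

using UnityEngine;

// Illustrative stand-in for a TrackingAction with every axis locked except
// vertical position.  Attach to the small sphere above the mouth; because
// every mouth piece is child-linked to the sphere, they all follow it.
public class VerticalMouthDriver : MonoBehaviour
{
    // Assumed external input: how far the tracked left mouth-corner has
    // moved from its rest position (not part of the original setup).
    public float trackedCornerY;

    public float sensitivity = 1.0f;

    private Vector3 restPosition;

    void Start()
    {
        restPosition = transform.localPosition;
    }

    void Update()
    {
        // Only Y is unlocked: X and Z always stay at their rest values,
        // mirroring the locked position constraints.
        Vector3 p = restPosition;
        p.y += trackedCornerY * sensitivity;
        transform.localPosition = p;
    }
}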

The sides of the lip are a single object on each side, whilst the upper and lower lips are each made up of three pieces - left, right and center - to provide maximum expressiveness when the TrackingAction flexes them in response to detected movement of the left side of the real-life mouth.

The lips are animated by putting a TrackingAction in each lip piece: the center pieces are constrained to roll vertically up and down, while the left and right pieces rotate up and down in a sideways direction to represent how the corners of a happy or serious / angry mouth turn up and down.

The center-piece of the upper and lower lips acts as the parent object, with the left and right pieces child-linked to it.
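
As a rough code illustration of the difference between the two constraint styles (again plain Unity C#, with LipPieceDriver and mouthSignal invented for the example; the real components are configured in the Inspector, as the screenshots below show):

using UnityEngine;

// Illustrative sketch of the two lip-piece constraint styles.
public class LipPieceDriver : MonoBehaviour
{
    public enum PieceType { Center, Side }
    public PieceType piece = PieceType.Center;

    // Assumed external input: signed amount of detected mouth movement.
    public float mouthSignal;

    public float degreesPerUnit = 30f;

    private Quaternion restRotation;

    void Start()
    {
        restRotation = transform.localRotation;
    }

    void Update()
    {
        float angle = mouthSignal * degreesPerUnit;

        // Center pieces roll vertically up and down (a pitch around X),
        // while the side pieces turn their corner up or down sideways
        // (a roll around Z), curling the mouth corners.
        Vector3 axis = (piece == PieceType.Center) ? Vector3.right : Vector3.forward;
        transform.localRotation = restRotation * Quaternion.AngleAxis(angle, axis);
    }
}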

Here are the settings for the center-piece of the lip.

http://sambiglyon.org/sites/default/files/mouthmotion2.jpg

And here are the settings for the side-pieces.

http://sambiglyon.org/sites/default/files/mouthmotion3.jpg

Finally, each of the lip pieces has a Rigidbody component with a high mass of 10,000, zero drag, and gravity enabled, giving the objects strong forward momentum when moving and minimal resistance to movement.  All constraints are locked: although a TrackingAction camera script ignores a Rigidbody's constraints and uses its own instead, locking them prevents the lip pieces from being knocked off the face by a collision with another object.

http://sambiglyon.org/sites/default/files/mouthmotion5.jpg
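
If you would rather apply those Rigidbody values from code instead of the Inspector, a small setup script like the sketch below reproduces them (LipRigidbodySetup is just an illustrative name):

using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class LipRigidbodySetup : MonoBehaviour
{
    void Awake()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        rb.mass = 10000f;     // high mass: strong momentum once moving
        rb.drag = 0f;         // zero drag: minimal resistance to movement
        rb.useGravity = true; // gravity on, as in our Inspector settings
        // Lock everything.  The tracking script uses its own constraints
        // anyway, but locked Rigidbody constraints stop a collision from
        // knocking the lip pieces off the face.
        rb.constraints = RigidbodyConstraints.FreezeAll;
    }
}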

2 Replies
MartyG
Honored Contributor III

Here's the latest version of our RealSense-powered mouth movement and lip-shape control system!

https://www.youtube.com/watch?v=1UF3Y3WfgQY&feature=youtu.be

MartyG
Honored Contributor III

In another post on this forum, we provided a how-to guide for using more than one TrackingAction script in a single object, both to further amplify the strength of motion-generated movements and to build complex joints from a single object.

Here's a link to that guide: https://software.intel.com/en-us/forums/topic/549805
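
To give a feel for why stacking several movement scripts in one object amplifies the motion, here is a toy plain-Unity sketch (AdditiveMotionDriver is an invented name, not the toolkit API): each copy applies its own displacement every frame, so two copies on the same object move it twice as far for the same tracking signal.

using UnityEngine;

// Toy illustration of additive stacking: add this component to an object
// two or more times and the per-frame offsets simply sum.
public class AdditiveMotionDriver : MonoBehaviour
{
    public Vector3 offsetPerSignal = Vector3.up;

    // Assumed external input from face tracking.
    public float signal;

    void Update()
    {
        transform.localPosition += offsetPerSignal * signal * Time.deltaTime;
    }
}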

We have upgraded our game avatar with this latest advancement and made a YouTube video that showcases the difference that setting up multiple TrackingAction scripts inside the same object makes to the facial expressiveness of features such as the mouth, lips and eyelids.

https://www.youtube.com/watch?v=VYJVz7qQjKA&feature=youtu.be
