Software Archive

Real-time avatar arm side-arcing using RealSense and Unity

MartyG
Honored Contributor III

Hi everyone,

I'd like to share a quick 4-second YouTube clip of my company's latest progress in RealSense-driven avatar tech, created in Unity: the avatar swings an arm sideways across the front of its body in real time, mirroring the player's own arm moving in front of them.

https://www.youtube.com/watch?v=7ILdnZvGi7E&feature=youtu.be

4 Replies
samontab
Valued Contributor II

That looks fun!...

Now I want to make a "magic mirror" that slightly changes something in the person :)

MartyG
Honored Contributor III

I've long planned to create such a mirror myself for my avatar.  An easy way to do it in Unity is to make the surface of the mirror a Render Texture and put whatever a game camera is seeing onto its surface.

http://docs.unity3d.com/Manual/class-RenderTexture.html

This episode of my app's diary series shows how to set up a Render Texture.

http://sambiglyon.org/?q=node/1367

You could then overlay a semi-transparent sheet on top of the mirror that shows various reality-changing effects and images.
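
For anyone who wants to try it, here's a rough C# sketch of that setup, assuming a second camera placed behind the mirror and a Quad used as the mirror glass. All the object and field names below are placeholders of my own, not from an actual project.

using UnityEngine;

// Rough sketch: renders what a "mirror camera" sees into a Render Texture
// and displays that texture on the mirror's surface material.
public class MagicMirror : MonoBehaviour
{
    public Camera mirrorCamera;     // camera looking back out of the mirror
    public Renderer mirrorSurface;  // e.g. a Quad acting as the mirror glass

    void Start()
    {
        // Create a Render Texture and make the mirror camera draw into it
        var rt = new RenderTexture(1024, 1024, 16);
        mirrorCamera.targetTexture = rt;

        // Put whatever the camera is seeing onto the mirror's surface
        mirrorSurface.material.mainTexture = rt;
    }
}

The semi-transparent effects sheet would then sit just in front of the Quad.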

MartyG
Honored Contributor III

I improved the dual-arm side-swinging system and its stability a lot on Monday, to the point where I was able to record 30 seconds' worth of footage of the system in action before a serious glitch occurred.  It'll get better and more refined in the coming weeks.

https://www.youtube.com/watch?v=jCT1umewRME&feature=youtu.be

Of course, the reason any Japanese anime fan wants an avatar that can move its hands together is to perform the Kamehameha technique!  My squirrel-guy was so close to mastering it :)

https://www.youtube.com/watch?v=eg1bCXFLKdY

MartyG
Honored Contributor III

I did a new tech test this morning to further push the processing capabilities of the RealSense cam.  I made a copy of my avatar that used the exact same scripting and placed it face to face with the original, to see if the pair could demonstrate the potential of RealSense for simulations involving emotional intimacy (I mean hugging and kissing, before you get your hopes up, folks!).

I also wanted to see how RealSense handled two complex avatars at once (i.e., multiplayer).

If you are at work, best not click on the video below of two male squirrel-guys without clothes hugging and kissing (only because I haven't got around to making a lady squirrel or clothes yet, so I'm working with the materials I have available to advance RealSense science).  :)

https://www.youtube.com/watch?v=2m2pMTjPnk8&feature=youtu.be

As can be seen, getting the camera-controlled arms to hug is quite difficult because the two avatars are accessing the exact same scripts and control systems simultaneously, so they are effectively fighting over which of them gets to use a script at any given moment.  Things will likely be a lot smoother once each avatar has its own unique set of scripts.  RealSense is a powerful beast when you push it hard, so I don't doubt it's up to the job.

Edit: We gave each avatar its own individual control scripting, and it definitely improved performance, though the result is still somewhat awkward compared to single-avatar control.

https://www.youtube.com/watch?v=2IrgwPdgK-g&feature=youtu.be
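
To illustrate what I mean by giving each avatar its own scripts: in rough terms, the arm control now lives in a component that each avatar carries its own instance of, instead of both avatars reaching into one shared object.  The sketch below is just an illustration with made-up names, not our actual control scripting.

using UnityEngine;

// Illustrative sketch of per-avatar control. Each avatar GameObject gets its
// own instance of this component, so the arm state of one avatar can never
// overwrite the other's mid-frame.
public class AvatarArmController : MonoBehaviour
{
    public Transform leftShoulder;
    public Transform rightShoulder;

    // Per-instance state: each avatar keeps its own target rotations
    private Quaternion leftTarget = Quaternion.identity;
    private Quaternion rightTarget = Quaternion.identity;

    // Fed by whatever translates the RealSense hand data into rotations
    public void SetArmTargets(Quaternion left, Quaternion right)
    {
        leftTarget = left;
        rightTarget = right;
    }

    void Update()
    {
        // Each avatar eases toward its own targets, independently of the other
        leftShoulder.localRotation = Quaternion.Slerp(
            leftShoulder.localRotation, leftTarget, Time.deltaTime * 5f);
        rightShoulder.localRotation = Quaternion.Slerp(
            rightShoulder.localRotation, rightTarget, Time.deltaTime * 5f);
    }
}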

We will keep working on prototyping complex-interaction local multiplayer.  It may turn out, though, that the optimal arrangement is online multiplayer where players meet up in a shared environment, so that each individual avatar is processed by the RealSense camera of the player at their own location.

Edit 2: I found later that I had the Smoothness weighting on the shoulder joints turned up twice as high as it needed to be.  The two-avatar animation was noticeably smoother once I turned the weighting down from '8' to '4'.  Oops!
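
For anyone wondering what that kind of weighting does in practice, here's a simplified, hypothetical illustration of joint smoothing (not the actual toolkit setting or our real values): the higher the weight, the more of the previous frame's rotation is kept each update, so the joint lags further behind the camera data.

using UnityEngine;

// Hypothetical illustration of a joint smoothness weighting. A larger weight
// blends in less of the newly tracked rotation per update, so the joint is
// steadier but slower to follow the player's real movement.
public class JointSmoother : MonoBehaviour
{
    [Range(1f, 16f)]
    public float smoothnessWeight = 4f;  // e.g. '8' versus '4'

    public Transform joint;

    // Call with the latest rotation derived from the camera tracking data
    public void ApplyTrackedRotation(Quaternion tracked)
    {
        float blend = 1f / smoothnessWeight;  // weight 8 -> only 12.5% new data per call
        joint.localRotation = Quaternion.Slerp(joint.localRotation, tracked, blend);
    }
}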
