
RealSense Avatar Animation Stress Test

MartyG
Honored Contributor III

Hi everyone,

We are well aware that getting one aspect of a game performing well is only part of the battle. The true test comes when the different systems are brought together and run at the same time. In those circumstances, an avatar whose arms move beautifully in isolated tests may move like treacle once the rest of the avatar mechanics and the game world around it are being processed as well.

For this reason, we thought it would be a good idea to perform a stress test of our avatar in Unity at the maximum 'Fantastic' graphics quality level, with limb and facial animation, body fat / muscle animation and the physical collision system all active. Although the limbs moved more slowly than in our previous test, in which we controlled the limbs alone, this was to be expected, and the RealSense camera coped very well under the load of the many inputs being imposed on it from multiple sources.

https://www.youtube.com/watch?v=bGGlbc0yk2E&feature=youtu.be
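
For anyone who would like to run a similar "everything on" test, here is a rough sketch of the kind of Unity setup script we mean. It is only a sketch: the component references (limbAnimation, facialAnimation, muscleAnimation, collisionColliders) are placeholders standing in for your own avatar scripts and are not part of Unity or the RealSense SDK.

using UnityEngine;

public class AvatarStressTest : MonoBehaviour
{
    // Placeholder references to the avatar sub-systems (assumed names).
    public Behaviour limbAnimation;
    public Behaviour facialAnimation;
    public Behaviour muscleAnimation;
    public Collider[] collisionColliders;

    void Start()
    {
        // Force Unity's highest built-in quality level ('Fantastic' in the
        // default quality settings list) so the rendering load is at maximum.
        int highest = QualitySettings.names.Length - 1;
        QualitySettings.SetQualityLevel(highest, true);

        // Switch every avatar system on at once for the stress test.
        limbAnimation.enabled = true;
        facialAnimation.enabled = true;
        muscleAnimation.enabled = true;
        foreach (Collider col in collisionColliders)
            col.enabled = true;
    }
}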

The main lesson from the test was a confirmation of earlier findings: whilst the facial animation and muscle animation perform well when the limbs are positioned low down, those systems can cut out temporarily when the hands are raised to lift the arms high, because the hands obscure the camera's view of the facial tracking points that drive the face and muscle animation.
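
For anyone building something similar, one way to soften that temporary cut-out is to hold the last good facial readings while the face is hidden, rather than letting the blendshapes snap back. The sketch below is only illustrative: the faceDetected flag and trackedWeights array are assumed to come from whatever face-tracking script feeds your avatar, and no actual RealSense SDK calls are shown.

using UnityEngine;

public class FaceAnimationGuard : MonoBehaviour
{
    public SkinnedMeshRenderer faceMesh;   // mesh carrying the facial blendshapes
    private float[] lastWeights;           // last good readings from the camera

    // Call this each frame with the latest tracking results.
    public void ApplyFaceReadings(bool faceDetected, float[] trackedWeights)
    {
        if (faceDetected)
        {
            // Cache the latest good values while the face is visible.
            lastWeights = (float[])trackedWeights.Clone();
        }

        if (lastWeights == null)
            return; // nothing to show yet

        // Drive the blendshapes from the most recent good data, so the face
        // holds its pose instead of cutting out when the hands block the camera.
        int count = Mathf.Min(lastWeights.Length, faceMesh.sharedMesh.blendShapeCount);
        for (int i = 0; i < count; i++)
            faceMesh.SetBlendShapeWeight(i, lastWeights[i]);
    }
}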

We had observed that in Intel's demonstrations they tend to have the camera positioned atop the monitor, whereas we place ours on the desk. We understood the logic behind this elevated position: it allowed the face to be read clearly because, from the camera's higher viewpoint, the hands did not rise up over the face as much during large arm swings.

So we re-tested with the camera atop the monitor. Although the limbs now moved as fast during the "everything on" stress test as when we had been testing them in isolation - validating our ongoing belief that the camera is a powerful beast - it was actually more tiring to play that way. This was because, while sitting, the hands still had to be held up higher than before for the camera to be able to see them from its elevated position on the monitor.

The higher the real-life hands are lifted, the more gravity acts on the player's arms to pull them down again, whereas the tug of gravity is gentler when the camera sits lower down on the desk below the monitor.

Another factor that made control more awkward when the camera was on top of the monitor was that the player becomes actively conscious of the camera's presence as they look upwards towards it, and they then start thinking about controlling the individual systems such as the arms and face. This is a huge immersion breaker. When the camera is down low on the desk, however, the mind is less aware of it and the player can control the avatar effectively because they are focused on the avatar rather than the camera.

This means that the avatar can really come alive, because the camera takes care of automating the complex animation via the readings it takes from the player whilst the player enjoys the game. The principle is akin to making a TV reality documentary where the participants are asked to "act natural". When they can see the cameras they never do, but when the cameras are out of sight they soon forget about them and behave as they usually would - for better or worse!

 
