Software Archive
Read-only legacy content

Complete RealSense Unity project diary released for free

MartyG
Honored Contributor III

Hi everyone,

During the development of my company's RealSense app in Unity, we kept a deeply detailed development diary that shows step by step how we created our entry from start to finish.  Now that the deadline has passed, we have released the entire 40-episode series for free on our website, with a final deadline-day episode to be posted tomorrow.

We are freely sharing all of the information and experience we gained from the technologies we developed during the project, so that others can build upon them in their own RealSense projects and create even greater apps, especially in the coming year during the inevitable App Challenge 2015.

We humbly hope that the RealSense community, and even non-RealSense Unity developers, will benefit from the diary.  Good luck in the contest!

http://www.sambiglyon.org/fatherdiary

Vidyasagar_MSC
Innovator

This is awesome, Martin and team. At last I have a guide for trying RealSense with Unity.

Keep us posted on your work.

MartyG
Honored Contributor III

Thanks so much, Vidyasagar!  We feel that we probably discovered things about RealSense in Unity that hadn't been known before (such as the 'TrackingAction' tracking script's Virtual World Box and Real World Box settings affecting an object's behavior even when the script is "un-ticked" to disable its motion tracking).  So it's our hope that some of our research gets built upon and incorporated into a future SDK in a more user-friendly form that anyone can use immediately.  *smiles*

 

MartyG
Honored Contributor III

Although our dev diary contains step-by-step details of how to build the RealSense tech we developed into your own Unity projects, I thought I would summarize some of the discoveries we made.

Some of the systems we built (moving mouth, eye expressions and eyebrows) are featured in other camera-control software, so - since they are not unique to RealSense - we will focus here on our more original discoveries.  If you want to implement those facial systems in your own project, full details are in the diary series.

*  A 'TrackingAction' tracking script's settings in Unity's 'Inspector' panel can affect an object's behavior even when the script has been un-ticked to disable its motion tracking.  In this way, you can gain a deeper level of fine control over the movement of an object that you don't necessarily want to control directly with the camera.

This may be true for other types of Action script, such as 'TranslateAction', as well, but we did not have the opportunity to test it in our own project.
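For anyone who wants to experiment with this, here is a minimal Unity C# sketch of toggling a tracking script's tick-box from code rather than from the Inspector.  The class name and key binding are placeholders rather than scripts from the diary:

```csharp
using UnityEngine;

// Minimal sketch of toggling a tracking script at runtime instead of via
// the Inspector.  Drag the object's 'TrackingAction' (or any other Action
// script) into the 'trackingScript' slot.  'TrackingToggle' and the key
// binding are placeholder choices, not scripts from the diary.
public class TrackingToggle : MonoBehaviour
{
    public MonoBehaviour trackingScript;   // assign the TrackingAction here

    void Update()
    {
        // Pressing T does the same thing as clicking the script's tick-box
        // in the Inspector: the script stops running, though (as noted
        // above) some of its settings may still influence the object.
        if (trackingScript != null && Input.GetKeyDown(KeyCode.T))
        {
            trackingScript.enabled = !trackingScript.enabled;
        }
    }
}
```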

*  The RealSense SDK recognizes foot / toe and knee joints as well as hands, allowing you to control virtual legs and feet by assigning their objects to a hand control.  The knee is treated like a giant palm joint, allowing crude movement of an object by moving a knee around in front of the camera.

A drawback is that if you assign the legs to one or both of the hands, you have to keep your hands out of view of the camera (e.g. by putting them behind your back).  Otherwise the SDK will take control inputs from both the legs and the hands at the same time, which conflicts with the leg control and makes the object difficult to control.

As I mentioned in a previous forum post about foot control, I hope a future SDK will have a separate set of foot-control definitions.  It should be relatively easy to implement, since the camera recognizes and treats the feet exactly the same as the hands; all that should be needed is some specific foot-joint labeling such as 'Big Toe Base' and 'Little Toe JT 1'.

If a project could support two RealSense cameras, each with their own unique settings, then one camera could read the upper body and the second camera the lower body, making full-body arm / hand and leg / foot control much easier.

*  RealSense arm control is not limited to the hands.  It is possible to construct a complete working arm (bendable fingers, a turnable wrist, a lower arm that bends up / down and swings left / right, and a shoulder blade / joint that moves the whole arm up, down, inwards and outwards).

Of course, RealSense does not detect joints other than the hand and wrist by default, so to program control of the upper and lower arm sections you need to think in terms of what the hand is doing when you move those parts of your real body.  For instance, in our project we were not using the pinky finger (our avatar was a three-fingered squirrel-guy).  Through long anatomical study and observation during our research, we saw that it is impossible for the lower arm to bend up and down without the fingers lifting and dropping with it.  So we programmed the lower arm's elbow joint to rotate up and down whenever the pinky finger moved up and down, and to swing left and right when the pinky moved left and right.

We applied this philosophy throughout our avatar.  If a part we wanted to control did not have a trackable joint, we observed the anatomy of the real-life body and identified which trackable parts moved at the same time as the untrackable one (e.g. we built an eye expression system that made sad, neutral or angry expressions depending on the position of the lower left side of the lip when the player was expressing those emotions).  By doing so, we were able to construct an avatar that mirrored the movements of the player.
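To make the idea concrete, here is a rough Unity C# sketch of that 'proxy joint' approach - reading the position of a small camera-driven object standing in for the pinky and turning its movement into elbow rotation.  The names and scaling values are illustrative placeholders, not the exact diary script:

```csharp
using UnityEngine;

// Rough sketch of the 'proxy joint' idea: the camera moves a small hidden
// object (driven here by the pinky via its own TrackingAction), and this
// script, placed on the elbow joint, converts that object's movement into
// rotation of a joint the SDK cannot track directly.  'pinkyProxy' and the
// scaling values are illustrative placeholders, not the exact diary script.
public class ElbowFromPinky : MonoBehaviour
{
    public Transform pinkyProxy;        // object moved by the camera
    public float pitchPerUnit = 90f;    // degrees of elbow bend per unit of vertical proxy movement
    public float yawPerUnit = 90f;      // degrees of swing per unit of sideways proxy movement

    private Vector3 restPosition;
    private Quaternion restRotation;

    void Start()
    {
        // Remember where the proxy sits and how the elbow is posed when the arm is relaxed.
        restPosition = pinkyProxy.localPosition;
        restRotation = transform.localRotation;
    }

    void Update()
    {
        Vector3 offset = pinkyProxy.localPosition - restPosition;

        // Up / down movement of the pinky bends the lower arm up / down;
        // left / right movement swings it left / right.
        float pitch = -offset.y * pitchPerUnit;
        float yaw   =  offset.x * yawPerUnit;

        transform.localRotation = restRotation * Quaternion.Euler(pitch, yaw, 0f);
    }
}
```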

*  Action scripts like 'TrackingAction' do not have to be used directly on an object to get the control results that you need.  Instead, you can camera-control one object in a way that lets it communicate with non-camera scripts in other objects in complex ways.

In our project, we created a "control room" beneath the visible game environment.  This control room contained 'cages', each with a simple slider block at its center that moved up and down in a shaft in response to hand and face inputs to the camera.  At the sides of each cage were 'trigger blocks' that projected a "collider field" into the shaft.  Whenever a camera input moved the slider block through one of these fields, it activated trigger scripts inside the blocks that sent activation signals - 'wirelessly', over any distance - to scripts inside other objects in the game.

http://sambiglyon.org/sites/default/files/control.jpg

The main way in which we used the cages was to send activation instructions to non-camera rotation scripting inside the avatar body parts.  Each time the block passed through a field, an activation signal was fired once, rotating the body part in whatever direction and speed the non-camera rotation script inside it had been set up for.  In this way, the player could precisely control how far they moved an avatar limb - the stronger the movement they made with the hand or face, the more trigger fields the slider passed through in the cage shaft, each time firing off an activation signal to the limb's rotation script to move the object X degrees left, right, up or down.

For this reason, we had cages in a variety of formats, ranging from 2 trigger blocks at the side (for a quick up / down) to 12 blocks (for long movements that required multiple firings of the rotation script before the object reached its destination - like an arm moving from hanging at the side of the body to being lifted up to the head).
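Here is a simplified Unity C# sketch of the trigger-block idea - the slider passes through a trigger volume and fires one rotation step on a limb elsewhere in the scene.  The class names and the tag are placeholders rather than the exact diary scripts:

```csharp
using UnityEngine;

// Simplified sketch of one cage: the camera-driven slider block (which needs
// a Rigidbody and a Collider, tagged 'SliderBlock' here) passes through this
// block's trigger collider and fires one rotation step on a limb elsewhere in
// the scene.  'TriggerBlock', 'StepRotator' and the tag are placeholder names;
// in Unity each class would live in its own .cs file.
public class TriggerBlock : MonoBehaviour
{
    public StepRotator targetLimb;      // rotation script on the body part to move

    void OnTriggerEnter(Collider other)
    {
        if (targetLimb != null && other.CompareTag("SliderBlock"))
        {
            targetLimb.FireStep();      // one field crossed = one activation signal
        }
    }
}

// Sits on the limb itself and rotates it by a fixed step each time a
// trigger block fires, so more fields crossed = a bigger total movement.
public class StepRotator : MonoBehaviour
{
    public Vector3 stepDegrees = new Vector3(0f, 0f, 10f);   // direction and size of one step

    public void FireStep()
    {
        transform.Rotate(stepDegrees, Space.Self);
    }
}
```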

You may be wondering why we went to the trouble of building such a system when the 'TrackingAction' script alone can produce such movements.  The reason is that the control cages make it far easier to persuade an object to travel in a straight line instead of wandering off diagonally along the way.  They also greatly reduce occurrences of mis-rotation (an object moving when you do not want it to because the hand has been turned sideways or moved forwards and back in front of the camera).

Our initial camera experiments found that precise control of an arm and its individual sections - precise enough to be used in a professional simulation environment - was impossible because mis-rotation kept pulling the arm away from where we were trying to move it, so we built the control room to solve this issue.

Hopefully mis-rotation will be eliminated in a future SDK (perhaps simply by providing tick-boxes in the Inspector panel that let a developer specify that an object should not move when the camera detects a hand side-rotation or forward-back movement).  In the meantime, our slider system enables developers to make finely controlled simulations that were not feasible before.  See the diary for full details of how to build one in your own project.
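For readers building their own sliders, here is one generic Unity C# way to keep a camera-driven block travelling in a straight line by pinning the unwanted axes each frame.  This is an illustration of the idea rather than the exact setup described in the diary:

```csharp
using UnityEngine;

// Generic way to stop a camera-driven slider wandering off its shaft: after
// the tracking scripts have moved the object each frame, pin every axis except
// the shaft's travel axis.  LateUpdate runs after Update, so this executes
// after the frame's tracking movement has been applied.  This is an
// illustration of the idea, not the exact setup from the diary.
public class ShaftConstraint : MonoBehaviour
{
    private Vector3 startPosition;

    void Start()
    {
        startPosition = transform.localPosition;
    }

    void LateUpdate()
    {
        Vector3 p = transform.localPosition;
        // Keep only vertical (y) travel; x and z stay pinned to their start values.
        transform.localPosition = new Vector3(startPosition.x, p.y, startPosition.z);
    }
}
```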

*  There was one area where forward-back mis-rotation was actually a big advantage, though (hence our reasoning that it should be switchable on and off with tick-boxes rather than eliminated from the SDK altogether).  Once we saw that our avatar tended to lean forwards and backwards when the head moved towards or away from the camera, we realized this could be used to make an avatar that actually bends over at the waist when the head is pushed forwards or back.

We therefore divided our avatar into upper and lower body sections.  As the human body bends over at the stomach, everything above the stomach was assigned to the upper body, and the stomach and everything below it (groin area and legs) were assigned to the lower body.  By putting a 'TrackingAction' script in the head of the avatar, and making all of the other objects in the upper body children of the head via 'parent and child' linking, we could make the entire upper body bend forward and back when the head moved towards or away from the camera.  The body bent over exactly like a real person, with even the shoulders moving.
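Here is an illustrative Unity C# sketch of that waist-bend setup - a pivot object at the stomach, with the upper body parented to it, bending in proportion to how far the camera-driven head object has moved from its rest depth.  The field names and values are placeholders:

```csharp
using UnityEngine;

// Illustrative sketch of the waist bend: this script sits on an upper-body
// pivot at the stomach, with the chest, arms and head parented underneath it,
// and bends the pivot in proportion to how far the camera-driven head object
// has moved from its rest depth.  Field names and values are placeholders.
public class WaistBend : MonoBehaviour
{
    public Transform headProxy;         // object driven by the camera
    public float degreesPerUnit = 60f;  // how far to bend per unit of head depth change
    public float maxBend = 75f;         // stop before the avatar folds in half

    private float restDepth;
    private Quaternion restRotation;

    void Start()
    {
        restDepth = headProxy.localPosition.z;
        restRotation = transform.localRotation;
    }

    void Update()
    {
        // Depending on how the tracking script maps depth, the sign of
        // 'lean' may need flipping.
        float lean = headProxy.localPosition.z - restDepth;
        float bend = Mathf.Clamp(lean * degreesPerUnit, -maxBend, maxBend);

        // Everything parented to this pivot (shoulders, arms, head) follows
        // the bend, just as it does on a real body.
        transform.localRotation = restRotation * Quaternion.Euler(bend, 0f, 0f);
    }
}
```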

*  Head forward-back mis-rotation also makes it possible to move the avatar by leaning forward and back in your chair to make it go forwards and stop, and to change direction by turning the head left and right.  In the end, though, we had to resort to arrow-key keyboard control for this, as it was difficult to stop quickly and we kept flying out of windows!  Also, as the neck can only turn so far, neck-turning could rotate the avatar by only a fraction of a turn, which made a full 90-degree change of direction impossible.  This could certainly be fixed, given more time (which we ran out of).

*  When a human body bends up and down, the breasts and muscle / fat also compress and decompress as they are lifted and dropped.  We wanted to replicate this muscle and fat motion in our avatar.  We did this simply by putting a copy of the head's 'TrackingAction' into the torso parts too (chest and stomach), so that when the player bent forward in their chair the torso parts moved down, and when the player leaned back in the chair they rose up again.

As the chest, stomach and groin were each independent objects, they merged in and out of each other as they rose and fell, giving the impression of fat and muscle changing form in response to the avatar's body posture.
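A rough sketch of how that torso movement could be wired up in Unity C#, using the same head-depth reading to slide the chest and stomach objects down and up so that they merge in and out of each other.  Again, the names and values are placeholders rather than the exact diary scripts:

```csharp
using UnityEngine;

// Illustrative sketch of the torso compression: one copy on the chest and one
// on the stomach.  As the camera-driven head object moves towards the camera
// (the player leaning forward), the part slides down; as the head moves back,
// it rises again, so the overlapping parts appear to compress and decompress.
// Names and values are placeholders, not the exact diary scripts.
public class TorsoCompression : MonoBehaviour
{
    public Transform headProxy;          // object driven by the camera
    public float dropPerUnit = 0.2f;     // how far this part sinks per unit of lean
    public float maxDrop = 0.15f;        // keep the movement subtle

    private float restDepth;
    private Vector3 restPosition;

    void Start()
    {
        restDepth = headProxy.localPosition.z;
        restPosition = transform.localPosition;
    }

    void Update()
    {
        // As with the waist bend, the sign of 'lean' depends on the tracking setup.
        float lean = headProxy.localPosition.z - restDepth;
        float drop = Mathf.Clamp(lean * dropPerUnit, -maxDrop, maxDrop);

        transform.localPosition = restPosition - new Vector3(0f, drop, 0f);
    }
}
```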

********

We look forward to seeing what developers will do with the RealSense methods that we designed.  Best of luck!

Vidyasagar_MSC
Innovator

Thanks for taking the time to post this awesome explanation.  It helps a lot going forward.

MartyG
Honored Contributor III

I would also add that Netflix was an invaluable animation research tool for us.  We had a list of animated shows that we watched parts of every night for months, each time gaining new inspiration about how to replicate realistic character body movements and facial expressions with our RealSense-powered avatar.

The shows (mostly anime) included My Little Pony: Equestria Girls, Full Metal Alchemist: Brotherhood, Attack On Titan, Fairy Tail and many others.  

Our ultimate benchmark for judging the success of our camera-powered avatar was whether a player could use it to replicate the broad range of movements in the cafeteria song from Equestria Girls.  For the most part we succeeded.

https://www.youtube.com/watch?v=A4WLuR70ZO4

AndreCarlucci
Beginner

Wow Marty, fantastic!

The community really appreciates it.

simsam7
Beginner

Hi Marty,

Thank you for taking the time to put this out there for the community.  Some of the links on your site, as well as the link you posted above (http://www.sambiglyon.org/fatherdiary), do not work.  This link on your site also does not work: http://www.sambiglyon.org/drupal/?q=node/1296

I'd love to see the full diary; it looks like you really put a lot of effort into it.

Regards,
Sam

 

MartyG
Honored Contributor III

Gah!  It looks like the entire site has gone down!  It was fine when I went to bed.  It's likely something happening at the hosting company's servers.  I apologize - thanks for the heads-up about the downtime.

Edit: it seems to be primarily a problem with URLs that use friendly shortcut words (like 'fatherdiary') or contain the word 'drupal'.  I tracked down a version of the URL that works:

http://sambiglyon.org/?q=node/1369
