Unity Tip: Easier Facial Control of Onscreen Objects with RealSense
I found a simple and effective way to minimize loss of tracking for face-controlled objects in Unity, especially those where the camera is in a fixed position and does not follow behind the object. It can conceivably be applied to non-Unity applications that use facial tracking too.
Training your users to keep their eyes loosely fixed on the position of the face-controlled object seems to greatly reduce occurrences of dropped tracking. For example, in my company's full-body avatar project, the avatar, whose motion is driven by the nose-tip, could be walked and steered around a room at the same time without stalling at all, provided I kept my eyes focused on the approximate position of the avatar's head and followed it around the room as it walked.
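To make the nose-tip-driven control concrete, here is a minimal, language-agnostic sketch of that kind of mapping, with a confidence gate that freezes the avatar whenever tracking weakens. The function name, coordinate convention, and threshold values are my own illustrative assumptions, not RealSense SDK calls or the project's actual code.

```python
DEAD_ZONE = 0.05  # normalized screen units; ignores small head jitter

def steer_from_nose(nose_x, nose_y, confidence, min_confidence=0.5):
    """Convert a normalized nose-tip position (0..1, centre at 0.5)
    into (turn, walk) commands, holding still when tracking is weak."""
    if confidence < min_confidence:
        return 0.0, 0.0        # freeze the avatar on weak tracking
    turn = nose_x - 0.5        # left/right head offset steers the avatar
    walk = 0.5 - nose_y        # raising the head walks it forward
    if abs(turn) < DEAD_ZONE:
        turn = 0.0
    if abs(walk) < DEAD_ZONE:
        walk = 0.0
    return turn, walk
```

In a Unity script the same logic would live in `Update()`, feeding `turn` and `walk` into the avatar's rotation and translation each frame; the dead zone keeps the avatar from drifting when the user's head is nominally centred.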
Part of the explanation for this improvement may be that a user's attention tends to drift when using camera-controlled applications, especially when tired: the head subconsciously turns away from the screen or moves toward or away from the camera, which can cause tracking loss.
If the user is trained to loosely follow a specific object on the screen, however, their face remains relatively centered in front of the camera instead of turning away to the side of the computer or dipping toward the floor.