
Unity : Some problems

NIKOLAOS_P_
Beginner

Hi,

I recently started dabbling with RealSense in order to integrate it into an existing project that currently works with a different 3D sensor.

1) The CPU... I've read countless times that it has to be 4th gen. What does that really mean for other CPUs? Can it be the cause of misbehavior with event sources? I'm having trouble with events: for example, I'm trying to use a grab/hand-open gesture pair to drive a translation (tracking) action as a means to implement drag-and-drop of UI elements. Sometimes the events misfire or fire more than once.

2) I'm trying to implement UI-cursor-like movement with gestures, i.e. drag and drop of UI elements using grab and fingers-spread gestures. For some reason it doesn't work well. I've succeeded in making a UI cursor with an override of the Tracking Action and hand detected/lost as the start/stop events. But when I try to add the grab/spread gestures instead, the translation (I constrain it to no rotation and translation only on the x,y axes) lags a fair bit, as if the start/stop events get misfired or fire more than once, like I said before. So even with a smoothing weight of 20 it stays laggy. With hand detected/lost and the above configuration (plus RBox center = Vec3(0,0,30), RBox dimensions = Vec3(50,30,30), VBox center = Vec3.zero and VBox dimensions = Vec3(screen width, screen height, 0)) I can simulate the cursor pretty well.
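For reference, the kind of mapping I'm effectively after (hand position inside the RBox translated to a screen-space cursor position) would look roughly like the sketch below; it's only an illustration of the linear mapping I assume the boxes perform, not the toolkit's actual internals.

#pragma strict

// Illustrative only: maps a hand position inside a real-world box onto a
// 2D screen-space cursor position, mirroring the RBox/VBox setup above.
var realBoxCenter : Vector3 = Vector3(0, 0, 30);
var realBoxDimensions : Vector3 = Vector3(50, 30, 30);

function MapHandToScreen(handPos : Vector3) : Vector2 {
    // Normalise the hand position to a -0.5..0.5 range inside the real box
    var nx : float = (handPos.x - realBoxCenter.x) / realBoxDimensions.x;
    var ny : float = (handPos.y - realBoxCenter.y) / realBoxDimensions.y;

    // Scale up to pixels (the virtual box is screen width x screen height x 0)
    var screenX : float = Mathf.Clamp01(nx + 0.5) * Screen.width;
    var screenY : float = Mathf.Clamp01(ny + 0.5) * Screen.height;
    return Vector2(screenX, screenY);
}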

I tried some of the tips Marty G. has given around the forum, like setting the VBox and/or the RBox to zeros, but to no avail.

I also tried using hand closed/opened, but they didn't behave any better than grab/spread.

Forgot to mention: I'm using Unity 5.0.3f2 64-bit.

-Nikos

MartyG
Honored Contributor III

1.  The F200 desktop camera supports 4th Generation Haswell processors or newer Intel Core processors (e.g. 5th Generation Broadwell and the forthcoming 6th Generation Skylake).  Some Intel processors lack the Core branding (e.g. the Pentium-type processors) and these will not work properly with RealSense.  So you need to make sure it's a Core-brand processor.

The R200 mobile camera's developer kit, meanwhile, will support the Core chips, plus the Intel Atom Cherry Trail CPU for embeddables.

I don't think the gesture misfires are due to the processor generation.  I think they just tend to be over-sensitive, with the sensitivity depending on how easy it is for the camera to see a particular face or hand landmark.  It's an issue I have to find ways of dealing with in my own projects.

For example, if you use the left and right eye turning gestures then they are extremely easy to trigger.  The camera also frequently mis-triggers the finger spread gesture.  Whereas if you use the thumb-up or thumb-down, they are much less likely to mis-activate because you have to put your hand into a specific finger-thumb arrangement before the camera recognizes the gesture.

2.  Regarding my tip about setting the Real World Box and Virtual World Box to zero: whilst you can do this, it is no longer as necessary as it used to be, since Intel fixed the stability issues in the previous R3 SDK release (the one before the current R4 release).  Leaving your Real World Box on the defaults (50 for Center and 100, 100, 100 for Dimensions) can actually enhance control stability now.

My personal experience is that the Virtual World Box has only really been useful for position-based movement.  With rotation-type movement, the values seem to have little effect on an object's behavior and may as well be set to 0, 0, 0.

Edit: it's worth mentioning that an exception to the 'VWB doesn't do much for rotation' rule is if you have an object with Z-depth that moves backwards and forwards.  If you are finding that an object that uses rotation leaps backwards when the hand is detected, setting the VWB to 1, 1, 1 can greatly reduce this snap-back of the object.  

NIKOLAOS_P_
Beginner

*dramatic and mysterious tone* I've been expecting you.

Right, I forgot to mention some details. First off, I'm only using the F200. My CPU at work is, if I'm not mistaken, a 3rd gen (i7-3820).

1) Indeed, some of them are over-sensitive. At first I was trying to see how reliable each one is using a Send Message Action. That's how I noticed some of them being overly sensitive, like the eye turning, or the fact that the head gestures, albeit present, are in fact deprecated, etc.

But apart from the fingers-spread gesture, which you say is indeed sensitive, I used various combinations mostly based on "easy" patterns like the full hand closed, as a fist, or open, etc. (with various tries at different factor values like the openness and so on). Still I get some misfires of hand opened when it is in fact closed. Or sometimes, even with continuous tracking on, hand opened will fire if you make swift motions with an open hand, and that is the gesture set to stop the Tracking Action.

2) Rotation isn't a problem for now; input is only meant to be used for interacting with 2D UI. Setting the RBox to the default values didn't really help (in combination with the VBox defaults). In fact the one I presented is the only configuration working so far for what I wanted. On the other hand, these were just values I somehow got right intuitively, but I have no idea if I need to apply some logic like FOV math or something to widen it properly.


-Nikos

MartyG
Honored Contributor III

I've experienced the same misfires of open and closed hands that you have.  Finding an appropriate control gesture becomes an adventure of trying every gesture and settling for the one that misfires the least.  :)  I try to avoid using SendMessageAction for action triggering unless absolutely necessary, as gesture recognition tends to be far more sensitive than other forms of landmark tracking such as TrackingAction.  I built my own trigger systems in Unity to get better control over script activations.
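For illustration, a trigger gate can be as simple as a cooldown timer in front of the action. This is just a minimal sketch of the idea rather than my actual system, and OnGestureFired is an illustrative stand-in for whatever function your gesture event ends up calling.

#pragma strict

// Illustrative cooldown gate: ignore repeat firings that arrive within
// 'cooldown' seconds of the last accepted one.
var cooldown : float = 0.5;
private var lastAccepted : float = -999.0;

// Hypothetical entry point - point the gesture event (or a SendMessageAction)
// at this function name.
function OnGestureFired() {
    if (Time.time - lastAccepted < cooldown) {
        return; // too soon after the last accepted firing - treat as a misfire
    }
    lastAccepted = Time.time;
    Debug.Log("Gesture accepted at " + Time.time);
    // ...start or stop the drag here...
}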

"Continuous Tracking" can be a misleading setting that you don't actually really need to use. I wasted a lot of development time because of problems that it caused me because I didn't properly understand what it did for a long time.  Really, it has little to do with tracking something continuously.  What it actually does is force a TrackingAction script to use Hand 0 for its tracking.  

So if you have an object that you want to be independently controlled with Hand 1 (the second hand), having Continuous Tracking ticked in that TrackingAction makes the object liable to ignore Hand 1's movements and get its movement info from Hand 0.  If you have one object controlled with the right hand and one with the left, this can cause "mirroring", where moving Hand 0 causes the Hand 1 object to precisely copy its movements.  In my full-body avatar, this meant that the two arms kept lifting up and down together in perfect sync instead of moving independently!

NIKOLAOS_P_
Beginner

I only used SendMessageAction to see debug log entries in the console and, using the collapse feature, watch the counters increase as I was testing. In some combinations the continuous flag helped stop the false positives and stabilized the event firing. But about the index locking you're describing: is it hard-coded at index 0, or at the index you give in the Inspector?

Right now I have two UI objects (each with a background image and an icon image as children; one is a left-hand cursor, the other a right-hand one). I'm trying to define good behavior for alternating between them when using the application. If I understand correctly, you're saying that if I want to use the right-hand cursor with the right hand only (and respectively for the left) and also use the continuous flags, I'm going to see mirroring?

MartyG
Honored Contributor III

My own experience has been that when the camera can only see one hand, it treats that as Hand 0 whether you are using the left or right hand.  So if you have an object that you normally control with your right hand and you decide to raise your left hand to the camera instead, the left hand is treated like Hand 0, even if you normally use that hand as Hand 1.  Objects that are set to Hand 1 will not move.

When both hands are visible to the camera, objects set to Hand 0 and Hand 1 will move independently.  If you are using the Continuous Tracking option in your Hand 1 objects though, they will tend to sometimes stop obeying movements of your Hand 1 hand and follow the movements of the Hand 0 hand instead, causing the aforementioned mirroring.

It gets a bit more complicated!  The tracking behavior will vary depending on what kind of ID you are using for tracking.  For instance, if you are using 'Track By ID', I have found that the camera favors Hand 0, and tracking of Hand 1 will be lost unless you keep moving your hands towards and away from the screen to re-activate Hand 1 tracking; otherwise tracking stalls a few seconds after the camera has lost sight of Hand 1.  If you use the Fixed type of ID, though, tracking loss happens less frequently, as the two hands are treated more equally.

In answer to your question: if you use Continuous Tracking on your Hand 1 objects then yes, you are more likely to run the risk of having your left-hand cursor move wrongly due to it following what Hand 0 is doing.

Edit: an alternative way to activate the Debug Log message would be to create an Empty GameObject and give it a box collider field, then tick the 'Is Trigger' option on the collider's settings.  This tells the collider to activate an event when an object enters it instead of trying to stop the object from passing through.  

The trigger will activate any script inside the Empty GameObject that contains an OnTrigger function (for example, OnTriggerEnter - activate when the object enters; OnTriggerStay - run the script for the duration that an object is inside the collider field; and OnTriggerExit - activate the script when an object that has entered the field leaves its boundaries).

Inside the OnTrigger function in the script, you can put the Debug Log line of code, so that the message is sent to the debug log only when the enter / stay / exit condition has been satisfied.
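Put into script form, that arrangement looks roughly like this (a simple sketch to attach to the Empty GameObject; the log messages are just examples):

#pragma strict

// Attach to the Empty GameObject whose collider has 'Is Trigger' ticked.
function OnTriggerEnter(other : Collider) {
    Debug.Log("Object entered the trigger: " + other.name);
}

function OnTriggerExit(other : Collider) {
    Debug.Log("Object left the trigger: " + other.name);
}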

NIKOLAOS_P_
Beginner

I'm not going to use fixed IDs, since I have no idea which hand is going to be registered as 0 or 1; what I need is left- or right-hand detection.

Also, I don't need both hands in the scene; in fact I only want one at a time. But when the user raises the left hand I want the cursor icon to change to the left one. Frankly, it might be less of a pain to just use one object and flip the sign of the scale to switch the cursor image between right and left (it's done like that anyway for the left cursor). That will also help avoid the case where the user throws both hands in front of the camera and activates both cursors when I only need one.
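Something along these lines is what I have in mind for the scale flip (just a rough sketch; SetLeftHanded is a name I made up):

#pragma strict

// Rough sketch: mirror the cursor icon horizontally so one object can
// serve as both the right-hand and the left-hand cursor.
function SetLeftHanded(leftHanded : boolean) {
    var s : Vector3 = transform.localScale;
    if (leftHanded) {
        s.x = -Mathf.Abs(s.x);
    } else {
        s.x = Mathf.Abs(s.x);
    }
    transform.localScale = s;
}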

By the way, what combination of gestures have you found to be the least sensitive so far, considering the type I need?

Thanks for your help so far; forgot to say it before.

-Nikos

MartyG
Honored Contributor III

Thanks for the thanks.  :)

The gesture I have found to be most resistant to accidental activation is Two Finger Pinch.

For the changing of the icon's appearance, you could use an Image Array script.  With this script, you can assign a particular texture to the script and when the script runs, it replaces whatever texture the object currently has.  So if you have two Image Array scripts, one with the right-hand texture and one with the left-hand texture, then you can switch back and forth between left and right icons when each script is activated.

Here's a JavaScript that makes a 'Materials' setting appear in the Inspector.  By expanding the setting and putting a '1' in the 'Size' text box, you can create an Element slot where you can pick a texture to assign to the script.  The selector only recognizes Material-type textures, so you will need to apply your texture to an object at least once (e.g. on a test cube) so that Unity automatically creates a Material version of it that can then be selected and assigned to the script.

#pragma strict

// Drag your Material into this array in the Inspector ('Size' = 1 gives
// you one Element slot).
var materials : Material[];

function Update () {
    // Take the first Material in the array and apply it to this object's
    // renderer, replacing whatever material it currently has.
    var mat : Material = materials[0];
    GetComponent.<Renderer>().sharedMaterial = mat;
}

You use the exact same script for both the left and right hand icons, since it is in the Material section of the Inspector where you assign a unique texture to that script.

Here's an example of its use from my project, where I use arrays to change the phase of the moon each day from new moon to full moon and back again.

[Attached screenshot: Inspector view of the Materials array used for the moon-phase textures]

You could get SendMessageAction to activate this change by placing the SendMessageAction inside the same object that the array script is in and telling the SMA to look for a script that has Update as its function name (remember, SMA finds scripts by their function name instead of their file name).
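As an aside, since SMA targets scripts by function name, you could also put the swap in a uniquely named function instead of Update, so the material only changes when the message actually arrives. Again, this is just a sketch; SwitchIcon is an illustrative name, not part of the toolkit.

#pragma strict

var materials : Material[];

// Runs only when something (e.g. a SendMessageAction) sends a message
// targeting the function name "SwitchIcon", rather than on every frame.
function SwitchIcon() {
    GetComponent.<Renderer>().sharedMaterial = materials[0];
}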

NIKOLAOS_P_
Beginner

Really? The two finger pinch? I remember I couldn't quite get it right as a gesture. And I only managed to do the full pinch. Perhaps I need to try it again then.

MartyG
Honored Contributor III

Some of the gestures are activated in unexpected ways that you wouldn't guess from their names.  For example, the Two Finger Pinch doesn't seem to work if you have your fingers pointing sideways.  Instead, you have to point the tips of the pressed-together finger and thumb directly towards the camera (i.e. move your palm towards the camera as normal and then clamp index finger and thumb together).  You also seem to get greater reliability of recognition if you put your hand at a height where it is level with the green light.

NIKOLAOS_P_
Beginner

And if you were to pair it with an event for stopping the tracking action, considering the pinch as the start event source for a drag-and-drop tracking action, which one would you choose?

MartyG
Honored Contributor III

I use a combo of Two Finger Pinch and Thumb Up, since there is a natural flow between making the Two-Finger gesture and then parting the finger-and-thumb and closing the fingers to make the Thumb Up gesture.  It's a combo where you barely have to consciously think about what you are doing with your hand.

You could also use Two Finger Pinch with a Gesture Lost condition so that tracking is active for the duration that you have the fingers pinched and then ceases when you break the pinch.  Using Gesture Lost has a higher risk of accidental triggering than using Two Finger Pinch and Thumb Up though.

Note: I thought I should add why I don't use a Thumb Up and Thumb Down combo.  Whilst in theory this is a logical combo, in practice the Thumb Down gesture is harder for the camera to recognize than Thumb Up, and it hurts the wrist a lot to hold your hand in the Thumb down orientation for more than a second whilst waiting for the recognition.  And you always want to choose gestures that are the most comfortable for your end-users to do, especially if the gesture needs to be made frequently.

NIKOLAOS_P_
Beginner

OK, I'll try those and report back with my findings.

Ariska_Hidayat
Beginner

Maybe reinstall DCM v1.4, then restart the computer with the RealSense camera's cable still plugged in.
