Software Archive
Read-only legacy content

Detect shoulders using F200 RealSense camera

manish_k_2
Beginner
1,964 Views

Hi,

Is there any way to detect shoulders (or find shoulder coordinates) using the F200 RealSense camera? Even detecting points close to the shoulders would be fine.  Currently I can see it only supports face and hands.

Regards,

Manish

0 Kudos
8 Replies
MartyG
Honored Contributor III
1,964 Views

Although there is not a tracking point for shoulders, you can simulate the entire arm, shoulder to hand.  I have done this with a full-body avatar in my own project.

I wrote a guide to building an arm a while back.  The guide is very old and my RealSense arm control techniques have advanced a lot since then, but it'll give you a basic introduction to the principles of simulating a shoulder and arm.

http://test.sambiglyon.org/sites/default/files/Arm-Construction-Guide.pdf

0 Kudos
manish_k_2
Beginner
1,964 Views

Hi Marty,

Thanks for the suggestion, but my requirement is a little different. I need to determine the position of both of my shoulders, and while doing this the hands will not be visible (maybe just the upper arms and head would be visible). Moreover, I need to track these positions in real time, i.e. if their positions change I should be able to determine the new positions.

 

Regards,

Manish

0 Kudos
MartyG
Honored Contributor III
1,964 Views

I guess you could track the nearest visible face landmark's position, read its coordinates, and add or subtract an offset from those coordinates to work out the approximate positions of the left and right shoulders.

For example, for the left shoulder, you could say that it equals the position of the Chin landmark, minus a certain value of X (the horizontal left-direction distance of the shoulder from the chin position), minus a certain value of Y (to indicate that the shoulder coordinates are at a lower height than the chin).

For the right shoulder, you would *add* a certain X value instead of subtract (to indicate that the shoulder coordinates are to the right of the chin), and again subtract a Y value to indicate the shoulder's lower height.

It wouldn't be perfect tracking, since the Chin coordinates would vary if the head was lifted or dropped, or turned from left to right.  But it may give you a rough coordinate position for the left and right shoulders.
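
As a rough sketch of this offset idea (not the SDK's own API), the Python below assumes a hypothetical get_chin_position() helper standing in for whatever call your face-tracking module actually provides, and placeholder offset values that you would tune for your own camera distance.  It follows the convention in this post, where a larger height value means higher up:

# Rough sketch: approximate shoulder positions derived from the chin landmark.
# get_chin_position() is a hypothetical placeholder for your face-tracking call;
# the offsets are placeholder values to tune for your camera distance.

SHOULDER_X_OFFSET = 120   # placeholder horizontal chin-to-shoulder distance
SHOULDER_Y_OFFSET = 180   # placeholder vertical chin-to-shoulder distance

def estimate_shoulders(chin_x, chin_y):
    """Return approximate (x, y) positions for the left and right shoulders."""
    # Left shoulder: subtract X; right shoulder: add X.  Both shoulders sit
    # lower than the chin, so subtract Y (larger height value = higher up).
    left_shoulder = (chin_x - SHOULDER_X_OFFSET, chin_y - SHOULDER_Y_OFFSET)
    right_shoulder = (chin_x + SHOULDER_X_OFFSET, chin_y - SHOULDER_Y_OFFSET)
    return left_shoulder, right_shoulder

# Per-frame usage (hypothetical landmark call):
# chin_x, chin_y = get_chin_position()
# left, right = estimate_shoulders(chin_x, chin_y)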

0 Kudos
manish_k_2
Beginner
1,964 Views

Hi Marty,

Thanks for such a quick reply. Your solution is quite right. I have one more query: is there any way to find out when my shoulders are not at the same height (maybe when I bend, one will go lower and the other higher)? Moreover, if we can somehow find which one is higher and which one is lower, that would be great.

Thanks again for all your help.

Regards,

Manish

0 Kudos
MartyG
Honored Contributor III
1,965 Views

Generally, it is easy to find out what height both shoulders are at, because when you bend down the height value of the chin point will reduce in real time.  And since the shoulder calculation would be partly deriving its coordinates from the chin landmark, when the chin height goes down, the shoulder height will go down too.

For example, if the chin is at a height coordinate of '1200' and the shoulder is about 2 lower, then the shoulder height would be derived as chin height - 2.  So if the shoulder height started at 1200 - 2 = 1198, then when the chin height reduces - say, to 1196 - the sum will automatically update the shoulder value (1196 - 2 = 1194, the new shoulder height).

Anatomically, one shoulder will not be lower than the other when bending over unless you are consciously moving one shoulder to have a different height.  

If you really need to detect the shoulder heights separately, your best option may be to use the left eyebrow height as the main value for the left shoulder and the right eyebrow height for the right shoulder, and give each an addition / subtraction offset appropriate for the greater vertical distance between those landmarks and the shoulders.

The reason I suggest this approach is that whenever humans try to exert control over one particular side of the body, there is usually a part of the face that is also unconsciously moved.  For instance, if you are trying to hear something with a particular ear then the eyes will tend to move slightly in the direction of that ear.  And in the case of the shoulder, the eyebrow on the side of the lifted shoulder may also unconsciously lift slightly in response to the conscious effort being applied to the shoulder.

Edit: when you said about bending the shoulders, I wondered if you meant bending the body sideways rather than up and down.  In that case, the shoulders certainly would be at different heights when leaning left or right.  In that situation, using the eyebrows as your landmarks would definitely be the optimum height-tracking solution, because one eyebrow would be noticeably lower than the other, especially if you track the outer edge of the brow rather than its center.
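
To make the eyebrow idea a little more concrete, here is a minimal sketch along the same lines.  The get_left_brow_y() / get_right_brow_y() helpers are hypothetical stand-ins for however you read the outer-brow heights from your face-tracking API, and the brow-to-shoulder offset is a placeholder you would calibrate yourself:

# Minimal sketch: track each shoulder's height from the outer brow edge on the
# same side.  The landmark helpers are hypothetical placeholders; the offset is
# a value to calibrate for your setup (larger height value = higher up).

BROW_TO_SHOULDER_OFFSET = 260   # placeholder vertical brow-to-shoulder distance

def estimate_shoulder_heights(left_brow_y, right_brow_y):
    """Return approximate (left, right) shoulder heights from the outer brow edges."""
    left_shoulder_y = left_brow_y - BROW_TO_SHOULDER_OFFSET
    right_shoulder_y = right_brow_y - BROW_TO_SHOULDER_OFFSET
    return left_shoulder_y, right_shoulder_y

# Per-frame usage (hypothetical landmark calls):
# left_y, right_y = estimate_shoulder_heights(get_left_brow_y(), get_right_brow_y())
# if left_y < right_y:
#     ...   # left shoulder is lower than the right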

0 Kudos
manish_k_2
Beginner
1,964 Views

Hi Marty,

Thanks a lot for the reply. Yup, I meant sideways. I agree with your point that it's normal human behaviour that when one part of the body does something, other parts, like the face, will follow it, but I have a slightly unusual situation: when the user is sitting on a chair, they sometimes slip into an awkward position where their hand is still straight but their shoulders end up at uneven heights (maybe one of their arms is resting on the chair's armrest). It is also possible that the user intentionally raises one shoulder while keeping their head straight (I know it's not normal, but my software needs to detect that). That's why I was trying to avoid using face parts to calculate my shoulder positions.

I was thinking of taking the right or left extreme points from the blob, but that is also not showing very encouraging results (not sure if I am doing something wrong).
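
Roughly, what I mean by extreme points is something like the sketch below.  It assumes a binary person mask built by thresholding the depth image yourself; the mask source and the threshold value are illustrative assumptions, not SDK calls:

import numpy as np

def blob_extreme_points(mask):
    """Return ((x, y) of the left-most pixel, (x, y) of the right-most pixel), or None.

    'mask' is a 2D array that is non-zero where the person blob is, e.g. built
    by thresholding the depth image; how the mask is produced is an assumption.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    left_i, right_i = np.argmin(xs), np.argmax(xs)
    return (xs[left_i], ys[left_i]), (xs[right_i], ys[right_i])

# Example: treat everything nearer than 1.2 m as the person
# (placeholder threshold, depth in millimetres).
# mask = (depth_image > 0) & (depth_image < 1200)
# extremes = blob_extreme_points(mask)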

0 Kudos
MartyG
Honored Contributor III
1,964 Views

I have never been able to get blob tracking to work at all, so I am not surprised that you would have difficulty with it.

Intel's recommendation in their RealSense Best Practices document is that whilst you should aim to make a control interface behave like the real-life body, you should not aim to make the controls *too* real (i.e. aiming for a perfect simulation of the real body), because doing so can make your application not very enjoyable to use.

I would tend to agree with this approach.  As long as my avatar's body moves like a human's does, I do not worry if the movements of the real-life (RL) body are not replicated 1:1 in a perfect mirror on-screen.  1:0.8 is fine with me, because even if it is less than perfect, I can still perform the same actions as the RL body with relative ease, even if the movements do not match the RL anatomical structure perfectly.

My point is that if you can get a good approximation of the user's shoulder height, then it's not a terrible thing if a shoulder's height relative to its opposite neighbor does not precisely match the RL position.

 

0 Kudos
manish_k_2
Beginner
1,964 Views

I completely agree with your point, Marty. I think I should go with the assumptions you mentioned.

Thanks for all your help.

0 Kudos