
Shuttlecock Tracking Project

RTang10
Beginner

I am new to this area with intermediate coding skills.

 

I am currently looking to purchase the D435 for a university project which involves tracking the 3D position of a badminton shuttlecock.

 

Would someone be able to confirm this will be possible using this camera? (Assuming FPS and resolution are adequate)

 

If it is possible, where would I start?

I am currently leaning towards using OpenCV.

 

Thanks :)

MartyG
Honored Contributor III

The D435 model can track motion when attached to moving vehicles on highways, so it should be able to cope with the speed of a badminton shuttlecock. It will also probably help that the shuttlecock has a very complex surface with many edges, as that should make it easier for the camera to lock on to its details than if you were tracking a spherical object with a totally smooth surface that has low detail.

 

The D435 has a default maximum depth sensing range of 10 meters, so it could not track the shuttlecock's position with depth sensing until it comes within 10 meters of the camera. If you are using an average indoor badminton court as the scene for your capture, bear in mind that the full length of a court is around 13.4 m under badminton rules.

 

It is also worth noting that because of a factor called 'RMS error', the accuracy of the depth reading drifts noticeably once an object is more than 3 meters from the camera, as accuracy diminishes with distance.

 

If a longer range were needed, it may be possible to train a model to recognize the shuttlecock with 'DNN' (Deep Neural Network) object detection in OpenCV. The RealSense SDK has a couple of example programs for doing this that use RGB images for their object detection analysis.

 

Here is an example:

 

https://github.com/twMr7/rscvdnn
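

As a rough illustration only (the repository above has its own implementation, and an off-the-shelf model would need retraining on shuttlecock images before it could detect one), OpenCV's DNN module can run an SSD-style detector over the RGB stream. The model file names here are placeholders:

```python
import cv2

# Placeholder model files -- any SSD-style Caffe model exported for OpenCV works
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

frame = cv2.imread("rgb_frame.png")  # one RGB frame from the camera
h, w = frame.shape[:2]

# SSD networks expect a fixed-size, mean-subtracted input blob
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        # Box coordinates are returned as fractions of the image size
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```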

 

If the camera were in front of the players then picking up the shuttlecock might be quite straightforward, as it would be the only object moving in the camera's view. It would be more complicated if the camera were behind the players: the players could be detected and depth-measured by the camera, and they could also obscure the camera's view of the shuttlecock. So you might have to put the camera on some kind of mount or elevated tripod behind the players so it can see over their heads.

 

Another approach that might solve this would be to mount the camera at the impact wall so that it tracks the shuttlecock as it comes in for impact and rebounds back towards the player.

RTang10
Beginner

Thank you, you have been extremely helpful!

 

Yes, the camera will be well within 10 meters, so that would be great.

 

Would the depth map it provides let me determine the height of the shuttlecock accurately?

That is my main aim.

 

Thank you again

MartyG
Honored Contributor III

You are very welcome. :)

 

You could get real-world XYZ coordinates by using a 3D depth scanning mode called a point cloud. It is a bit more complex to implement, but it should give you the complete capture of the scene, and of what is happening in it, that you need.

 

You can read a tutorial for point cloud generation with the camera's SDK software here:

 

https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud
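

If you prefer to script it directly, here is a minimal sketch using the SDK's Python wrapper, pyrealsense2 (the stream settings are just illustrative):

```python
import pyrealsense2 as rs

# Start a depth stream (resolution and FPS here are illustrative)
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

pc = rs.pointcloud()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    points = pc.calculate(depth)      # one 3D point per depth pixel
    vertices = points.get_vertices()  # XYZ in meters, camera coordinates
    print("Point cloud size:", points.size())
finally:
    pipeline.stop()
```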

 

The SDK also has a pre-made program called the RealSense Viewer that you can practice with to learn about point clouds without doing any programming.

RTang10
Beginner

Hello, I am back with more questions.

 

Would someone be able to comment on the method below, which I believe is a way to do this using the video and depth streams in parallel? (A rough sketch of steps 1 and 2 follows the list.)

  1. Background subtraction
  2. Find shuttle in the RGB video frame
  3. Use detection box in frame to narrow down an area to look at in point cloud
  4. Identify shuttle in point cloud, extract XYZ coordinates
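

In case it helps, this is the kind of untested OpenCV sketch I have in mind for steps 1 and 2; the video source is a placeholder:

```python
import cv2

cap = cv2.VideoCapture("rally.mp4")  # placeholder: RGB stream or recording
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # step 1: moving pixels become white
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Step 2: assume the largest moving blob is the shuttle
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```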

 

Thanks for your time

MartyG
Honored Contributor III

Background subtraction seems like a reasonable approach if you are scanning from the viewpoint of the impact wall and want to make sure that the depth sensing does not capture the players in the background.

 

Regarding bounding boxes, the discussion in the link below may be useful to you.

 

https://github.com/IntelRealSense/librealsense/issues/2016
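

For steps 3 and 4, you may not need to search the full point cloud at all: once you have the detection box, you can deproject a single pixel (e.g. the box centre) to XYZ. Here is a minimal pyrealsense2 sketch, assuming depth is aligned to the RGB frame (the pixel coordinates are placeholders):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
align = rs.align(rs.stream.color)  # map depth pixels onto the RGB image

frames = align.process(pipeline.wait_for_frames())
depth = frames.get_depth_frame()
intrin = depth.profile.as_video_stream_profile().intrinsics

# (u, v) would come from the centre of the detection box in the RGB frame
u, v = 320, 240                  # placeholder pixel coordinates
dist = depth.get_distance(u, v)  # depth in meters at that pixel
x, y, z = rs.rs2_deproject_pixel_to_point(intrin, [u, v], dist)
print("Shuttle position (m):", x, y, z)
pipeline.stop()
```

Note that the camera's Y axis points downward in this coordinate system, so converting the coordinates into a height above the court floor needs the camera's mounting position and angle factored in.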
