Hello RealSense community!
I'm working in a lab, attempting to use the hand-tracking features of the RealSense SDK (as shown in the HandsViewer sample) to evaluate the joint angles of both hands. We have an F200 camera and a Creative Senz3D camera - we would buy another F200, but they appear to be out of stock. Our goal is to analyze how subjects use both hands to perform a series of experiments where the objects they are working with could obscure one of their hands, so we must use two cameras to maintain a continuous view of both hands. Thus, we would like to use the functionality of the RealSense SDK on the Creative Senz3D camera. I was hoping somebody could point me to what this would require - the way I see it, there are three possibilities:
1. The RealSense software can be used directly with the Creative Senz3D camera. (unlikely)
2. The RealSense software can be modified from the source code to be used with the Creative Senz3D camera.
3. The RealSense software relies on computations performed on the camera itself, so it is incompatible at a hardware level with the Creative Senz3D camera. The only way to achieve joint location/angle on the Creative Senz3D would be to write my own computer vision algorithms that work with the raw data from the Creative Senz3D camera.
Please help me figure out which of these is the case and some steps to get going if I can modify the source code. Thanks!
As I understand it, the RealSense SDK's code is quite different from the Senz3D's SDK, which is why Senz apps had to be rewritten for RealSense. While it is possible to create your own camera algorithms, as was done with the Senz3D (and Intel support staff have done so), it is more difficult with RealSense because of its increased complexity.
The easiest route for you may be to use the test modes that are already built into RealSense, where you can load pre-made data into a RealSense application instead of live camera input and process that input as though you were actually using a real F200.
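For reference, here is a rough sketch of how that playback mode works in the RealSense SDK for Windows, assuming a pre-recorded clip named recording.rssdk (the filename is illustrative). Passing false to SetFileName puts the capture manager in playback mode, so the hand module processes the recorded frames as if they came from a live F200:

```cpp
#include <pxcsensemanager.h>
#include <pxchanddata.h>

int main() {
    // Create the pipeline manager
    PXCSenseManager *sm = PXCSenseManager::CreateInstance();
    if (!sm) return 1;

    // Point the capture manager at a recorded .rssdk file;
    // the second argument false = playback (true would record)
    sm->QueryCaptureManager()->SetFileName(L"recording.rssdk", false);

    // Enable the hand-tracking module and initialize the pipeline
    sm->EnableHand();
    if (sm->Init() < PXC_STATUS_NO_ERROR) return 1;

    PXCHandModule *handModule = sm->QueryHand();
    PXCHandData *handData = handModule->CreateOutput();

    // Process recorded frames exactly as if they were live camera input
    while (sm->AcquireFrame(true) >= PXC_STATUS_NO_ERROR) {
        handData->Update();
        // ... query joints here via handData->QueryHandData(...) ...
        sm->ReleaseFrame();
    }

    handData->Release();
    sm->Release();
    return 0;
}
```

The same SetFileName call with true as the second argument records live F200 input to a .rssdk file, which is how you would produce the clips in the first place.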
This forum link should be helpful:
Brilliant suggestion! Unfortunately, the RealSense SDK was looking for a .rssdk file, and when I gave it a recording from the Senz3D instead, it was unable to run the hand-tracking algorithms on that video feed - my guess is that they rely on in-camera processing that just doesn't happen on the older devices. I believe it would be possible to create my own algorithm to provide this feature for the Senz3D, but I'm afraid that would be a lot of sunk time for an inferior result (since the Senz3D doesn't provide the same hardware capabilities and I clearly don't have the expertise of the team developing this), and even if I could do something at the same level, discrepancies in implementation might make the data too inconsistent for our project. Luckily, the SR300 is back in stock, so we'll go with that. Thanks for the help!