Software Archive
Read-only legacy content

Streaming Depth into Matlab

Andrew_H_2
Beginner
1,315 Views

I am using the Image Acquisition Toolbox in Matlab to try to stream data directly out of the camera, and I feel like I am extremely close.  Using the UVC-compliant webcam support package I can initiate a connection and receive images.

>> webcamlist
ans = 
    'Intel(R) RealSense(TM) 3D Camera Virtual Driver'
    'Intel(R) RealSense(TM) 3D Camera (Front F200) RGB'
    'Intel(R) RealSense(TM) 3D Camera (Front F200) Depth'
>> cam = webcam(3);
>> cam
cam = 
  webcam with properties:
                    Name: 'Intel(R) RealSense(TM) 3D Camera (Front F200) Depth'
              Resolution: '640x480'
    AvailableResolutions: {'640x480'  '640x240'}

However, the images appear either not to be bit-shifted correctly or to be corrupted somewhere along the way.  I am trying to recreate the same RGB24 image that the Intel Clip Editor produces when it saves an image list.  The closest I have gotten can be reproduced as follows.

>> while true
>>     test = snapshot(cam);
>>     % Subtract the two channels, then shift left two bits for display
>>     imagesc(bitshift(test(:,:,2) - test(:,:,1), 2));
>>     drawnow
>> end

If anyone could shed some light on how depth images are saved out from the Clip Editor, or on how they are encoded when a stream is initiated, that would be great.  My current method for getting data into Matlab requires a lot of manual interaction that I'd like to remove, so it would be ideal if I could get this to work.

Thanks,

Andrew 

7 Replies
samontab
Valued Contributor II

If you just take a raw stream from the camera you will not get a correct depth stream. The easiest way would be to use the RealSense SDK to get the correct depth values and export them into Matlab.

Andrew_H_2
Beginner

But why?  There are several streams I can pick from (INVZ, INRI, etc.), and some look exactly like the streams that most of the sample files display.  Surely there is a way to translate this into the same depth data that the RSSDK output produces.

As I said in the original post, I already know how to take an image, save it out using the Intel Clip Editor, and then read the resultant data into Matlab (I have already done this, and calibrated it against a linear stage).  The output is an RGB24 image where the first frame shows depth in mm and the second a method of unwrapping the 8-bit phase.
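If the exported RGB24 frame really spreads a 16-bit depth value across two 8-bit channels, recombining them is a one-liner. This is only a sketch: the channel order (channel 1 = low byte, channel 2 = high byte) and the filename are assumptions, not something the Clip Editor documentation confirms here.

```matlab
% Sketch: recombine a two-channel RGB24 export into 16-bit depth in mm.
% ASSUMPTION: channel 1 holds the low byte and channel 2 the high byte
% of a little-endian 16-bit value -- the Clip Editor's actual packing
% may differ.
img = imread('clip_frame.png');         % hypothetical exported frame
lo  = uint16(img(:,:,1));
hi  = uint16(img(:,:,2));
depth_mm = bitor(bitshift(hi, 8), lo);  % combine the two bytes
imagesc(depth_mm); axis image; colorbar
```

If the combined values come out plausible (hundreds of mm at arm's length), the assumption about byte order is probably right; if they look scrambled, try swapping the two channels.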

samontab
Valued Contributor II

Well, it would be great to have an independent way of accessing the camera depth information (XYZ in mm). I am not sure if this information is already encoded in the camera stream alone though.

My guess is that maybe you will be able to get the raw depth stream from the camera (disparity), but without the calibration numbers that information is not super helpful. I reckon the SDK has those calibration numbers stored in software, and if that is the case you will still need to use the RealSense SDK to get meaningful numbers from the depth image.
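For reference, the usual triangulation relation a depth SDK applies is Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. Without the calibrated f and B, the raw stream cannot be converted, which is the point above. A minimal sketch, with every number a placeholder:

```matlab
% Sketch: disparity-to-depth under a standard triangulation model,
% Z = f * B / d.  f_px and baseline_mm are HYPOTHETICAL -- the real
% values live in the camera calibration that the SDK reads out.
f_px          = 600;                    % assumed focal length, pixels
baseline_mm   = 50;                     % assumed baseline, mm
disparity_map = rand(480, 640) * 64;    % stand-in for a raw disparity frame
d = double(disparity_map);
d(d == 0) = NaN;                        % zero disparity = no measurement
depth_mm = f_px * baseline_mm ./ d;     % triangulation: Z = f * B / d
```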

Andrew_H_2
Beginner

I was hoping to find that bit and attempt to make a MEX file etc. with it.  However, looking through the source leaves me pretty empty-handed as to how the raw data is converted.  Granted, I am not a software engineer by trade, so maybe I'm missing something.  Hence the post.  If anyone else has something to add, please do.

Mogomotsi_W_
Beginner

Hello guys;

I need to know if the Intel RealSense is able to recognise facial expressions and track hand trajectories at the same time. I would like to use it for sign language recognition, in which meaning is conveyed by facial expression and moving hands; I would like to trace the hands from the point where they start moving to the point where they stop (the trajectory). If anyone has Matlab sample code for this, please share it with me at ghoms.wets@gmail.com.

MartyG
Honored Contributor III

Hi Mogomotsi,

RealSense can track face and hand gestures at the same time.  But if you lift your hands too high then they block the face and prevent the camera from reading it properly.  Also, you may find that the range of gestures provided by RealSense is not enough to make a full set of sign language gestures and you may have to consider programming your own camera algorithms (which is possible but difficult).

I suggested to another person who wanted to use RealSense for hand-signing that they may find it much easier to use a full-body avatar and set up trigger points around the body for the hands to pass through (e.g. the combination of 'index and middle finger apart' and 'hand in front of chest' means that a particular sign gesture has been made).

If you want a practical example of this, you can download my RealSense-powered full-body avatar.  It has all of the movement of real human arms and can open and close the fingers.  It'd give you a strong idea of how you might approach a sign language project using such an avatar.

https://software.intel.com/en-us/forums/topic/559451

Ferry_S_
Beginner

What I found is that the image is split up into lines: the first row of the raw buffer holds the first half of a horizontal line of the image, the second row holds the second half of that line, and so on.

So basically you can reconstruct the image like this:

% Index the odd and even rows of the raw buffer
X1 = 1:2:size(test,1);   % odd rows: first half of each image line
X2 = 2:2:size(test,1);   % even rows: second half of each image line
Y  = 1:size(test,2);
Z  = 1:3;

% Reassemble: odd rows form the left half, even rows the right half
half1 = test(X1,Y,Z);
half2 = test(X2,Y,Z);
TEST  = [half1, half2];

The resulting image is still not a pure depth image, though. You will see that only the green channel has values, and the intensity wraps from zero to high a couple of times as an object moves away from or towards the sensor. I assume the SDK needs to process the retrieved images further to actually reconstruct a depth intensity map.
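The de-interleave above can be sanity-checked on a synthetic frame of the same size; the frame contents here are random stand-in data, not real camera output.

```matlab
% Sketch: sanity-check the de-interleave on a synthetic 480x640x3 frame.
% Odd rows become the left half and even rows the right half of each
% line, so the reconstructed image should come out 240 x 1280 x 3.
raw   = uint8(randi(255, 480, 640, 3));
half1 = raw(1:2:end, :, :);   % odd rows  -> left half of each line
half2 = raw(2:2:end, :, :);   % even rows -> right half of each line
full  = [half1, half2];
size(full)                    % expected: 240  1280  3
```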
