
Depth Image

CLi37
Beginner

Is this a depth image?

samontab
Valued Contributor II

It should not look like that. It should be an 8-bit or 16-bit grayscale image.

Maybe you have the wrong drivers installed?

Maybe the representation of the data is wrong?

How did you get that image?

Marios_B_
Beginner

Actually, this is a depth image if you open the camera's depth sensor from an OpenCV VideoCapture or from software like AMCap.

However, to get correct depth values you may want to use the RealSense SDK and stream depth from there.
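For example, a minimal sketch of grabbing raw frames with OpenCV (the device index, and whether a raw 16-bit mode is exposed over DirectShow, are assumptions that depend on your setup):

```python
import cv2

# Hypothetical device index for the depth sensor; the depth stream
# usually enumerates as a separate capture device from the RGB one.
cap = cv2.VideoCapture(1, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)  # keep raw data, skip BGR conversion

ok, frame = cap.read()
if ok:
    # If the raw 16-bit mode is exposed, dtype should be uint16.
    print(frame.dtype, frame.shape)
cap.release()
```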

CLi37
Beginner

Yes, I captured this image from AMCap. So this image data is not depth data? I do not know what decoder it uses.

Is there any way to get depth data from this image, i.e. with a simple computation?

Is there any DirectShow codec to convert depth data to an image? I am thinking of converting depth values to the [0.0, 1.0] range and then to RGB space.
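For illustration, this is the kind of conversion I mean (a rough sketch in Python/OpenCV; depth.png stands in for a captured 16-bit frame, and this only produces a relative visualization, not metric depth):

```python
import cv2
import numpy as np

# Placeholder file name for a captured 16-bit depth frame.
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)

# Normalize to [0.0, 1.0], then map through a color table to RGB.
d = depth.astype(np.float32)
d = (d - d.min()) / max(float(d.max() - d.min()), 1.0)
vis = cv2.applyColorMap((d * 255).astype(np.uint8), cv2.COLORMAP_JET)

cv2.imwrite("depth_vis.png", vis)
```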

samontab
Valued Contributor II

The camera creates a device-specific image that encodes depth information per pixel (disparity). That's what you are getting. You need the SDK to convert it to real-world units (X, Y, Z, in mm for example). In theory you could do it yourself, but you would need to calibrate the camera, obtain the internal parameters, and then calculate the depth. The SDK does all of that for you.

It looks weird because it is a 16-bit grayscale image being displayed as an 8-bit image. You can see this in all those contours, which are basically 8-bit values that have wrapped around.
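You can reproduce the effect with a quick sketch (the ramp stands in for real depth values):

```python
import numpy as np

# A smooth 16-bit ramp viewed through an 8-bit display: the low byte
# wraps around every 256 counts, producing the contour-like bands.
depth16 = np.arange(0, 2048, dtype=np.uint16)
viewed8 = (depth16 & 0xFF).astype(np.uint8)  # what an 8-bit viewer shows

print(viewed8[250:260])  # 250..255, then wraps back to 0, 1, 2, 3
```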

CLi37
Beginner

Hi samontab,

I wonder: does the depth camera capture X, Y, Z coordinates, or just a distance to a specific point?

Is there a formula to convert each pixel value of the image to XYZ coordinates?

I may just need the depth data per pixel, which can be represented as a grayscale image (16-bit / 32-bit). That is the raw data.

I cannot use the SDK to convert it, or at least it would be hard to.

samontab
Valued Contributor II

The camera has a projector that projects IR patterns. The IR camera reads the image with those patterns projected into the scene.

Points that are closer to the camera will appear shifted to the side more than points that are farther away. That distance shift is the raw data created by the camera, in 16 bits. This number is device-specific; it is not in a standard measurement unit like mm, for example. You can use that number plus other internal details of the camera to generate an actual distance from the point to the image plane. You can also calculate X, Y, Z, with the optical center of the camera as the origin.
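As a sketch, the standard pinhole back-projection looks like this (the intrinsics fx, fy, cx, cy here are placeholder values; the real ones come from calibration, and the raw disparity must already have been converted to a metric Z):

```python
import numpy as np

# Placeholder intrinsics: focal lengths (fx, fy) and optical
# center (cx, cy), in pixels. Real values come from calibration.
fx, fy, cx, cy = 590.0, 590.0, 320.0, 240.0

def deproject(z_mm):
    """Back-project a metric depth map (Z in mm) to per-pixel X, Y, Z."""
    h, w = z_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = z_mm.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z))  # origin at the optical center, in mm
```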

As I said before, all of this is included in the SDK. If you cannot use the SDK, you will need to do all the measurements and calibration yourself, and since this is a closed product, you may not have everything you need. Or maybe you can figure out all those parameters... The point is that it is not a simple process if you are not using the SDK.

CLi37
Beginner

Is it possible to do all of this computation and pack the depth data into an image as part of Intel's pre-processing?

What we need is just depth data in a standard format, so we can do some post-processing in real time, or offline from stored stream data.

Intel could design its own codecs to encode/decode it; all we need is hardware-independent depth data.

What I do not understand is: if Intel can stream useless depth images, why can it not stream useful depth images?
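For example, a codec could pack each 16-bit depth value into two 8-bit channels of an ordinary video frame (a rough sketch; this only survives lossless codecs, since lossy compression would corrupt the bytes):

```python
import numpy as np

def pack_depth(depth16):
    """Pack 16-bit depth into two 8-bit channels of an RGB frame."""
    hi = (depth16 >> 8).astype(np.uint8)
    lo = (depth16 & 0xFF).astype(np.uint8)
    return np.dstack((hi, lo, np.zeros_like(hi)))

def unpack_depth(rgb):
    """Recover the original 16-bit depth from a packed frame."""
    return (rgb[..., 0].astype(np.uint16) << 8) | rgb[..., 1].astype(np.uint16)
```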

samontab
Valued Contributor II

In theory, yes, it is possible.

There are, however, a few reasons why it would be impractical, difficult, or even impossible to do so.

As you already know, you need this raw data plus other numbers to actually get a standard measurement. The storage of those numbers can be easily done in an SDK, but it may be hard or even impossible to store those numbers on the actual device itself.

Also, there are a few options for the output. You may want the depth data projected onto the color camera, or maybe you want the X, Y, Z coordinates instead of just the depth of a pixel, and so on. Which one do you stream? It can get tricky to cover all of those cases in just the camera itself.

There are other issues as well, but you get the idea... it is not as straightforward as you may have initially thought.

CLi37
Beginner

"As you already know, you need this raw data plus other numbers to actually get a standard measurement. The storage of those numbers can be easily done in an SDK, but it may be hard or even impossible to store those numbers on the actual device itself."

Think of the depth image stream as a motion picture stream: the extra information does not need to be stored in the device, it can be carried in the stream's data format.

"Also, there are a few options for the output. You may want the depth data projected onto the color camera, or maybe you want the X, Y, Z coordinates instead of just the depth of a pixel, and so on. Which one do you stream?."

We just need the depth data projected onto the color camera. Combined with the color camera data, the ideal case is to get depth data for each pixel of the color image. The depth data would be the distance from each pixel of the color image to an origin point (the viewpoint?). The XYZ coordinates can then be computed.
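For example (a small sketch building on the back-projection above; the distance to the viewpoint is the Euclidean norm of each pixel's X, Y, Z, which is different from the plain Z value):

```python
import numpy as np

def range_from_xyz(xyz):
    """Per-pixel distance to the optical center: sqrt(X^2 + Y^2 + Z^2).

    Note this differs from Z alone, which is the distance to the
    image plane rather than to the viewpoint.
    """
    return np.linalg.norm(xyz, axis=-1)
```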
