
Converting from RGB depth frame back to depth value

PTozo
Beginner

I'm saving the depth buffer frames to RGB using code like this, adapted from your sample code:

 

if ( auto df = depth_frame.as<rs2::video_frame>() )   // the colorized frame exposes its RGB8 pixels via get_data()
{
    unsigned char * raw_rgb = (unsigned char*)df.get_data();

So this saves a depth buffer -- which is 16-bit, AFAIK -- as RGB.

How do I then convert the RGB back to a depth value correctly (ideally, a float or something like that)? I'm doing some special post-processing on my end and I'm not currently seeing the code in the RealSense SDK that converts the 16-bit depth frame to RGB.

Obviously I could call depth_frame::get_distance(), but I am trying to call this in a separate tool inside Unreal Engine, which I've had no luck integrating with RealSense so far (though I'm eagerly awaiting the UE4 integration for RealSense). And depth_frame::get_distance() calls rs2_depth_frame_get_distance(), which seems to be a hidden library call.

In other words, can you explain the depth->RGB conversion to me, and how I can reverse-engineer that to get depth from RGB in an app that doesn't (and currently can't) use the RealSense libraries?

(Alternatively, I could save off the entire depth frame by calling depth_frame::get_distance() on every single pixel of the depth frame, but I'd *really* prefer not to have to do that ...)
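For reference, the depth-to-RGB conversion itself happens inside the SDK's rs2::colorizer processing block (that's what the sample I adapted uses); here's a rough sketch of that step as I understand it, with variable names guessed -- the mapping the colorizer applies internally is the part I can't see:

#include <librealsense2/rs.hpp>

// Sketch of the colorization step inside the capture loop: rs2::colorizer
// turns the Z16 depth frame into an RGB8 frame, whose bytes I then compress.
rs2::colorizer color_map;                              // SDK depth-to-RGB processing block
rs2::frame colorized = color_map.process( depth );     // 'depth' is the raw Z16 frame from the pipeline
if ( auto vf = colorized.as<rs2::video_frame>() )
{
    const unsigned char * raw_rgb = (const unsigned char *)vf.get_data();   // 3 bytes per pixel
    // ... this is the buffer I compress and write out ...
}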

idata
Employee

Hello Taliesin,

I will investigate your question further with the RealSense engineering team.

To get a better understanding of your project, could you please tell me what you are trying to achieve with your code?

Are you using an SDK sample as the base for your code? If so, which sample are you using?

I will be waiting for your reply so I can continue investigating your issue.

Best Regards,

 

Juan N.
PTozo
Beginner

Hi Juan,

Thanks for replying. I have a depth frame like this:

I'm trying to figure out the distance of each pixel from the camera based on this.

In other words, I'd like to be able to take any pixel in this image, and be able to write a function that converts the color back to a distance value --

float GetDistanceForColor( uint8_t red, uint8_t green, uint8_t blue )
{
    // returns the distance
}

... such that I could go, "OK, this particular shade of blue is 0.57 meters from the camera ... this shade of red is 1.3 m ..." etc.

jb455
Valued Contributor II

Do you actually need the RGB bitmap for anything? If you only need the depth values it would be much easier to save the raw depth buffer (not using the colourizer) as bytes then just read that back into an array of floats when you need it.

PTozo
Beginner

Heya jb455 --

The reason I prefer RGB is that it compresses quickly and efficiently. I'm saving this data into a custom video stream where total file size is important, and we need FAST compression. I don't currently have a good way to do that with 16-bit float depth data that's nearly as efficient as some of the existing image compression algos.

And in theory, it should be lossless -- we're going from a 16-bit depth value to 24 bits of R, G, and B, so I'd have to assume no information is lost in the depth->RGB conversion that RealSense does.

So again, I compress the RGB using standard, efficient image-compression algos.

Then I can decompress at runtime, and I *should* be able to somehow convert the RGB at each pixel of the image back to a depth value ...

PT

PTozo
Beginner

OK, as far as I can tell, saving the raw depth info doesn't work either ...?

I've added some code that spits out the depth data, writing a rather massive 450k file (at 640x360) every frame:

if ( auto df_raw = depth_frame.as<rs2::depth_frame>() )
{
    unsigned char * pDepthBuf = (unsigned char*)df_raw.get_data();

    auto pFile = fopen( raw_depth.str().c_str(), "wb" );
    if ( pFile )
    {
        // 2 bytes per pixel, written row by row
        fwrite( pDepthBuf, 2, kDepthFrameWidth * kDepthFrameHeight, pFile );
        fclose( pFile );
    }
}

Unfortunately, when I try to load this and represent it as a greyscale image in my separate editor app, I get this:

... and this SHOULD look like a depth map corresponding to this test image:

So ... if I'm to use the raw depth data for this, clearly I'm missing something about how the depth data is represented. I'd assumed it was one float per pixel, with each row saved linearly, from top down, English-text-style?

MartyG
Honored Contributor III

This reminds me of an old case where the rows were being mirrored to create multiple copies.

PTozo
Beginner

Heya MartyG -- thanks, but unfortunately, this didn't appear to be the problem. Attempting that reordering just made the depth image even worse.

MartyG
Honored Contributor III

Have you seen this discussion of RGBD compression methods as a source of alternative methods if your current approach does not work out?

PTozo
Beginner

MartyG: Yeah, that could help with compressing the depth data at some point down the road, but it doesn't address my current problem, which is actually accessing the raw depth data correctly and efficiently.

I was able to get a version that returns what appear to be the CORRECT depth values using the following:

if ( auto df_raw = depth_frame.as<rs2::depth_frame>() )
{
    float depth_vals[ kDepthFrameWidth * kDepthFrameHeight ];

    for ( int iy = 0; iy < kDepthFrameHeight; ++iy )
    {
        for ( int ix = 0; ix < kDepthFrameWidth; ++ix )
        {
            float dist = df_raw.get_distance( ix, iy );
            depth_vals[ ix + ( iy * kDepthFrameWidth ) ] = dist;
        }
    }

    auto pFile = fopen( raw_depth.str().c_str(), "wb" );
    if ( pFile )
    {
        fwrite( depth_vals, sizeof(float), kDepthFrameWidth * kDepthFrameHeight, pFile );
        fclose( pFile );
    }
}

 

Unfortunately, having to call depth_frame::get_distance() on each pixel in the depth frame slows it down immensely -- from 30 fps (in Release configuration) to an unacceptable 20 fps.

So, to summarize:

I can see 2 possible approaches:

1. Save the raw 16-bit depth buffer and then get the depth from that at runtime. This is the approach I'd prefer. Looking in the code, I see the comment "16-bit linear depth values. The depth is meters is equal to depth scale * pixel value." -- but I'm not sure how to interpret this. Depth scale means what, exactly? And is this a 16-bit unsigned int, or some other format?

RS2_FORMAT_Z16, /**< 16-bit linear depth values. The depth is meters is equal to depth scale * pixel value. */

2. Save a colorized version of the depth buffer (24-bit) and then get the depth from THAT at runtime. Again, though, approach # 1 is preferable -- can anybody help a brotha out?

jb455
Valued Contributor II

Apologies, the raw data buffer is actually a short array, not float, so each value is 2 bytes not 4, which is possibly how you got the misaligned image.

You need to multiply each short value by the depth scale of the camera [from device.get_depth_scale()], which will give you the float depth in metres. The depth scale can be changed, so you'd want to make a note of it when the image is captured, as the setting on the camera may be different when you come to reconstruct it.
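Something like this is roughly what I mean (an untested C++ sketch; I'm assuming the scale comes from rs2::depth_sensor::get_depth_scale(), that your file is just width*height unsigned 16-bit values, and QueryDepthScale/LoadDepthMetres plus the width/height/path parameters are placeholder names of my own):

#include <librealsense2/rs.hpp>
#include <cstdint>
#include <cstdio>
#include <vector>

// Capture side: note the depth scale once per recording, since it can change between runs.
float QueryDepthScale( const rs2::pipeline_profile & profile )
{
    // On D400 cameras this defaults to 0.001, so a raw value of 1234 means 1.234 m.
    return profile.get_device().first<rs2::depth_sensor>().get_depth_scale();
}

// Reconstruction side: raw 16-bit file -> float metres (a raw value of 0 means no depth data).
std::vector<float> LoadDepthMetres( const char * raw_depth_path, size_t width, size_t height, float depth_scale )
{
    std::vector<uint16_t> raw( width * height );
    std::vector<float> metres( raw.size(), 0.0f );
    if ( FILE * pFile = fopen( raw_depth_path, "rb" ) )
    {
        if ( fread( raw.data(), sizeof(uint16_t), raw.size(), pFile ) == raw.size() )
        {
            for ( size_t i = 0; i < raw.size(); ++i )
                metres[i] = depth_scale * raw[i];   // depth in metres = depth scale * pixel value
        }
        fclose( pFile );
    }
    return metres;
}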

Yeah, get_distance() isn't very efficient. It does warn against using it too much in the docs somewhere. I'm using C# and reading from the depth buffer (as well as doing alignment and manipulating the colour image) each frame with an acceptable FPS, so I'd imagine it'd be even faster in C++!

Going back to your original question, I think the colours on the colourized depth image are relative to the max and min depth currently visible, so there may not be a direct mapping between colour and depth that works in all cases. I may be wrong though. My reason for believing this is that if you open up the stereo module in the viewer app, the scale along the right side of the image isn't static.

PTozo
Beginner

Thanks so much, jb455!

Are they signed or unsigned shorts? I'd assume unsigned, but I can find out easily enough. I mean, why would you ever have a negative distance?

I'll try also saving off the depth each frame ...

PTozo
Beginner

Hmmm. Interestingly, they appear to be signed, since treating them as unsigned doesn't work.

PTozo
Beginner

Yep, that's it -- the invalid ones are the ones <= 0.

So that's good, as I'm planning a custom depth map cleanup algorithm that will be able to benefit from that info.
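Something like this is what I have in mind for the masking step of the cleanup (just a sketch of my own; 'raw' is the per-pixel 16-bit buffer read back from disk, and as above I'm treating anything <= 0 as "no data" -- with the unsigned Z16 interpretation the invalid marker is simply 0):

#include <cstdint>
#include <vector>

// Build a per-pixel validity mask from the raw depth values: 1 = usable depth, 0 = no data.
std::vector<uint8_t> BuildValidMask( const std::vector<int16_t> & raw )
{
    std::vector<uint8_t> valid( raw.size() );
    for ( size_t i = 0; i < raw.size(); ++i )
        valid[i] = ( raw[i] > 0 ) ? 1 : 0;   // pixels with value <= 0 carry no depth reading
    return valid;
}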

FDono
Beginner

Dear Taliesin,

could you please give us the code to save the depth image efficiently, and the code to recover it later for processing? Thanks.

Regards,

Felipe D.
