Software Archive
Read-only legacy content

Depth Stream to image

lee_j_2
Beginner

I am trying to convert the depth stream to a Mat image for processing, but I am getting several banding artifacts from the stream. I think it is a precision problem, but I am not sure how to fix it.

 

Top Left: Mat type shown using opencv

Top Right: Stream shown using SDK's UtilRender

Bottom Frame: Real scene

help.png



This is the code I am using.

 

    PXCSenseManager *psm = 0;
    psm = PXCSenseManager::CreateInstance();
    if (!psm) {
        std::cout << "manager not created";
    }
    std::cout << "manager created";
    psm->EnableStream(PXCCapture::STREAM_TYPE_COLOR, 640, 480);
    psm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 320, 240);
    psm->Init();

    UtilRender color_render(L"Color Stream");
    UtilRender depth_render(L"Depth Stream");

    PXCImage::ImageData data;
    PXCImage::ImageData data_depth;

    unsigned char *rgb_data;
    float *depth_data;

    // 8-bit, 3-channel image for the 640x480 colour stream
    IplImage *image = 0;
    CvSize gab_size;
    gab_size.height = 480;
    gab_size.width = 640;
    image = cvCreateImage(gab_size, 8, 3);

    // 8-bit, 1-channel image for the 320x240 depth stream
    IplImage *depth = 0;
    CvSize gab_size_depth;
    gab_size_depth.height = 240;
    gab_size_depth.width = 320;
    depth = cvCreateImage(gab_size_depth, 8, 1);

    for (;;) {
        if (psm->AcquireFrame(true) < PXC_STATUS_NO_ERROR) break;

        PXCCapture::Sample *sample = psm->QuerySample();

        PXCImage *color_image = sample->color;
        PXCImage *depth_image = sample->depth;

        // OpenCV processing: retrieve the images in Mat format
        color_image->AcquireAccess(PXCImage::ACCESS_READ_WRITE, PXCImage::PIXEL_FORMAT_RGB24, &data);
        depth_image->AcquireAccess(PXCImage::ACCESS_READ_WRITE, &data_depth);

        rgb_data = data.planes[0];
        for (int y = 0; y < 480; y++)
        {
            for (int x = 0; x < 640; x++)
            {
                for (int k = 0; k < 3; k++)
                {
                    image->imageData[y * 640 * 3 + x * 3 + k] = rgb_data[y * 640 * 3 + x * 3 + k];
                }
            }
        }

        short *depth_data = (short *)data_depth.planes[0];
        for (int y = 0; y < 240; y++)
        {
            for (int x = 0; x < 320; x++)
            {
                depth->imageData[y * 320 + x] = depth_data[y * 320 + x];
            }
        }

        color_image->ReleaseAccess(&data);
        depth_image->ReleaseAccess(&data_depth);

        cv::Mat rgb(image);
        imshow("color", rgb);
        cv::Mat dep(depth);
        imshow("depth_cv2", dep);
        if (cvWaitKey(10) >= 0)
            break;

        if (!color_render.RenderFrame(color_image)) break;
        if (!depth_render.RenderFrame(depth_image)) break;

        psm->ReleaseFrame();
    }

5 Replies
Henning_J_
New Contributor I

A couple of observations:

  • You are getting your depth data as floats and then casting to shorts. That should be fine for the values you are expecting, but you could simply get the data as unsigned shorts instead of floats by passing PIXEL_FORMAT_DEPTH in the AcquireAccess() call.
  • Your actual problem happens when you try to put 16-bit shorts into an 8-bit OpenCV Mat. It looks like it just keeps the bottom 8 bits of each 16-bit value, which produces the "banding" you are seeing. Instead, you should create a 16-bit unsigned cv::Mat; see the sketch below.
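For reference, a minimal sketch of that depth path, assuming the 320x240 depth stream and the depth_image pointer from your code (untested, so treat it as a starting point rather than a drop-in fix):

    PXCImage::ImageData data_depth;

    // Ask the SDK for 16-bit unsigned depth values directly
    depth_image->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_DEPTH, &data_depth);

    // 16-bit single-channel Mat, so no precision is lost
    cv::Mat depth16(240, 320, CV_16UC1);
    for (int y = 0; y < 240; y++)
    {
        // pitches[0] is the row stride in bytes
        unsigned short *row = (unsigned short *)(data_depth.planes[0] + y * data_depth.pitches[0]);
        for (int x = 0; x < 320; x++)
            depth16.at<unsigned short>(y, x) = row[x];
    }

    depth_image->ReleaseAccess(&data_depth);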
samontab
Valued Contributor II

Your OpenCV depth image is only 8 bits. That's why you have a problem.

Have a look at this video; it shows how to get the raw data and convert it into OpenCV:

https://www.youtube.com/watch?v=wIkIdjN6Oyw

You need to convert the OpenCV image to an 8-bit image in order to display it correctly, but the source should be 16 bits.
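For example, a minimal sketch of that display conversion, assuming a CV_16UC1 Mat named depth16 holding raw depth in millimetres and a maximum range of interest of roughly 1200 mm:

    // Scale 16-bit depth down to 8 bits purely for display
    // (255.0 / 1200.0 assumes ~1200 mm as the maximum range of interest)
    cv::Mat depth8;
    depth16.convertTo(depth8, CV_8UC1, 255.0 / 1200.0);
    cv::imshow("depth_cv2", depth8);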

lee_j_2
Beginner

Okay, thanks for the replies above.

I solved the issue.

lucas_m_1
Beginner

Hi Sebastian,

I've tried to use the code you explain on this video:

https://www.youtube.com/watch?v=wIkIdjN6Oyw

However, it seems to work only for close-range capture: for objects further than around 50 centimetres from the camera the depth data is the same, as if the objects were all at the same distance. I'm working on a project where I need to get depth data from a desk, so the camera has to be placed considerably far from the desk to capture the whole surface.

I added these lines to your code:

 - UtilRender depth_render(L"Depth Stream"); // right after the psm->Init();

 - if (!depth_render.RenderFrame(depth_image)) break; // before releasing the frame (depth_image is a pointer to sample->depth)

With these two additional lines I get the raw depth stream rendered, and the raw stream shows the correct depth data regardless of the distance.

I tried to figure out why the Mat image is not getting the right information but didn't find an answer. Can you give me a hand?

samontab
Valued Contributor II

First, these cameras are designed to capture the depth of things relatively close to them.

The F200's official depth sensing range is 0.2–1.2 metres. Other models do better: the SR300 has roughly a 60% longer range than the F200, and the R200 can detect a bit further away since it also uses stereo, not only active sensing.

So, you can't really measure things farther away than, say, 1 or 2 metres at most. And the farther away you are, the worse the data gets. This is true for any similar technology, including the Kinect v2, which reaches at most about 4 m with much, much more powerful IR emitters.

When you say the objects appear to be at exactly the same distance, do you mean they look similar in the rendered window, or did you print the values and find them identical? If you are only comparing them visually, they will look the same but in reality hold different values. Because of how depth is represented, you see big changes from black to grey when the camera is close to an object, but little or no visual variation in the whites when it is far away. So make sure you print the values rather than relying on visual inspection. Have a look here, where I re-encode the depth values linearly; once you do that, you can actually see differences in depth far away:

http://www.samontab.com/web/2015/11/interfacing-intel-realsense-3d-camera-with-google-camera-lens-blur/

Now, it may be possible that the SDK returns a constant depth value for things that are very far away. You should generally ignore those readings, as the depth data there is not good at all. Basically, treat 0s and 255s (or whatever edge values you have) as special cases and exclude those depth values in your application; a sketch of this follows below. This is normal for any 3D sensing device: 3D data from a real-world sensor is generally noisy and not continuous, i.e. there are 'holes' in the data.
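As a rough sketch of that masking idea, assuming a CV_16UC1 Mat named depth16 with 0 marking invalid readings:

    // Build a validity mask: pixels with depth 0 are treated as invalid
    cv::Mat valid = depth16 > 0;  // 8-bit mask, 255 where the reading is usable

    // Example: compute the min/max depth over valid pixels only
    double minDepth, maxDepth;
    cv::minMaxLoc(depth16, &minDepth, &maxDepth, 0, 0, valid);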

Also, you can control how far away the camera can see by tweaking its active parameters. Some settings come with a trade-off: for example, it may take longer to detect objects that are far away, so they can't be moving, etc. Have a look here for more info about this:

https://software.intel.com/en-us/forums/realsense/topic/537872

And finally, make sure you are requesting and processing the correct format for your depth image. Depth can come in different formats, such as 8-bit or 10-bit integers, 32-bit floating point, etc.
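A quick way to verify is to inspect the format field of PXCImage::ImageData after acquiring access. A sketch, assuming the 320x240 depth stream from earlier in the thread (it prints the depth at the centre pixel):

    depth_image->AcquireAccess(PXCImage::ACCESS_READ, &data_depth);
    if (data_depth.format == PXCImage::PIXEL_FORMAT_DEPTH)
    {
        // 16-bit unsigned depth, in millimetres
        unsigned short *d = (unsigned short *)data_depth.planes[0];
        std::cout << "centre depth (mm): " << d[120 * (data_depth.pitches[0] / 2) + 160] << std::endl;
    }
    else if (data_depth.format == PXCImage::PIXEL_FORMAT_DEPTH_F32)
    {
        // 32-bit floating point depth, also in millimetres
        float *d = (float *)data_depth.planes[0];
        std::cout << "centre depth (mm): " << d[120 * (data_depth.pitches[0] / 4) + 160] << std::endl;
    }
    depth_image->ReleaseAccess(&data_depth);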
