I am trying to copy the depth data into an OpenCV Mat image. These are my processing steps:
1st, I get the depth image using the SDK;
2nd, I get the depth image's ImageData using AcquireAccess;
3rd, I get the depth data pointer from planes[0] and cast it to ushort;
4th, I copy the value of every pixel to the corresponding pixel of an OpenCV Mat image.
But the resulting OpenCV Mat image is different from the depth image. I have tried the OpenCV Mat in both 8UC1 and 16UC1 types, and the result is always different.
The result is like the picture above: the left image is the depth image shown by UtilRender, and the right one is the OpenCV Mat image in 8UC1 type shown by imshow().
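In outline, the copy looks like this (a minimal sketch of steps 2 to 4, assuming a 640x480 depth stream, the 2016 R2 SDK headers, and an existing PXCImage* named depth_image; the main pitfall is that each row must be copied using data.pitches[0], the stride in bytes, which can be wider than 640 * sizeof(ushort)):

#include <cstring>
#include <opencv2/opencv.hpp>
#include "pxcimage.h"

// Copy one 640x480 depth frame into a CV_16UC1 Mat, row by row,
// honouring the plane's pitch (stride) rather than assuming packed rows.
cv::Mat CopyDepthToMat(PXCImage *depth_image)
{
    cv::Mat depth16(480, 640, CV_16UC1, cv::Scalar(0));
    PXCImage::ImageData ddata;
    if (depth_image->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_DEPTH, &ddata) >= PXC_STATUS_NO_ERROR)
    {
        for (int y = 0; y < 480; y++)
        {
            // planes[0] is a byte pointer; advance by pitches[0], not by 640 * 2
            std::memcpy(depth16.ptr<ushort>(y), ddata.planes[0] + y * ddata.pitches[0], 640 * sizeof(ushort));
        }
        depth_image->ReleaseAccess(&ddata);
    }
    return depth16;
}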
There was a similar case a couple of weeks ago involving a conversion to an OpenCV Mat that looked different from the original. I do not think there was a clear and definitive solution in that case, but you may see something helpful there.
According to the discussion I linked to above, somebody believed that RealSense's depth scan is 16-bit, but when it is converted to a Mat it becomes an 8-bit (8UC1) image. That may be why you are getting a black screen in 16UC1 mode.
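If that is the cause, one workaround is to scale the 16-bit values down to 8 bits yourself before display. A minimal sketch, assuming the depth Mat is named depth16, holds millimetre values, and that a maximum range of about 4 m is of interest:

#include <opencv2/opencv.hpp>

// imshow() maps a 16-bit image's full 0..65535 range to black..white, so raw
// millimetre depth values (usually well under 5000) look almost black.
// Scaling into 8 bits first gives a visible image, closer to UtilRender's output.
cv::Mat depth8;
const double maxRangeMm = 4000.0; // assumed maximum depth of interest, in mm
depth16.convertTo(depth8, CV_8UC1, 255.0 / maxRangeMm);
cv::imshow("Depth_cv2", depth8);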
Hi MartyG,
Thank you for your reply. It is helpful.
Do you mean that I am getting the correct depth values, and that the difference between the two images is due to the different ways UtilRender and imshow() display them?
In fact, I want to calculate the 3D distance between two points. To get the 3D coordinates of the points, I need their depth values. But when I test the calculation, there is always an error of about 20%, so I wonder whether I am getting the correct depth values.
Do you know about this?
My knowledge of RealSense-to-Mat conversion is not deep enough for me to comment on the difference between UtilRender and imshow(), unfortunately.
This discussion, with insights from RealSense stream programming expert jb455, may help you work out how to measure the distance between two points.
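One general point that may also help with the 20% error (a hedged suggestion, not from the linked discussion): single-pixel depth reads are noisy, and a pixel can hold zero when the camera has no data there, so it is common to average the valid depth values in a small window around the clicked point. A minimal sketch, assuming depth_data points at a 640x480 array of 16-bit depth values:

// Average the valid (nonzero) raw depth values in a (2*radius+1)^2 window
// around pixel (px, py); returns 0 if no valid samples were found.
unsigned short SampleDepth(const unsigned short* depth_data, int px, int py, int radius)
{
    int sum = 0, count = 0;
    for (int dy = -radius; dy <= radius; dy++)
    {
        for (int dx = -radius; dx <= radius; dx++)
        {
            int x = px + dx, y = py + dy;
            if (x < 0 || x >= 640 || y < 0 || y >= 480) continue; // stay inside the image
            unsigned short d = depth_data[y * 640 + x];
            if (d > 0) { sum += d; count++; } // zero means no depth data at that pixel
        }
    }
    return count > 0 ? (unsigned short)(sum / count) : 0;
}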
Thank you very much!
I'll search for more information about calculating distance.
And everybody is welcome to discuss this problem with me.
You're very welcome.
You can get the world coordinates of a depth point using the ProjectCameraToDepth instruction.
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?projectcameratodepth_pxcprojection.html Intel® RealSense™ SDK 2016 R2 Documentation
Then, as jb455 said in the discussion I linked to, you'll need to pick the points to measure between.
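For the depth-to-world direction specifically, the complementary call is ProjectDepthToCamera, which takes a depth pixel plus its raw depth value and returns camera-space coordinates. A minimal sketch, assuming a valid PXCProjection* named projection and millimetre units:

// Map one depth pixel (x, y) plus its raw depth value z to camera-space
// (world) coordinates. The inputs are packed into a PXCPoint3DF32.
PXCPoint3DF32 depthPoint = { 320.0f, 240.0f, 800.0f }; // example pixel coordinates and depth value
PXCPoint3DF32 worldPoint;
if (projection->ProjectDepthToCamera(1, &depthPoint, &worldPoint) >= PXC_STATUS_NO_ERROR)
{
    // worldPoint.x, worldPoint.y, worldPoint.z are in millimetres
}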
That's exactly what I used. I pick 2 points to test each time and get the world coordinates with ProjectDepthToCamera. Here is my code:
// Note: the angle-bracket header names were stripped when the code was pasted;
// the standard and OpenCV headers below are reconstructed from what the code uses.
#include <iostream>
#include <cmath>
#include <opencv2/opencv.hpp>
#include "pxcsensemanager.h"
#include "pxcsession.h"
#include "pxccapture.h"
#include "pxcprojection.h"
#include "pxcmetadata.h"
#include "util_render.h" // SDK-provided utility class used for rendering (packaged in libpxcutils.lib)
#include "util_cmdline.h"
using namespace std;
using namespace cv;
Point fp = Point(-1, -1);
Point sp = Point(-1, -1);
cv::Mat rgb;
cv::Mat dep;
void GetWorldCoordinates(Point p1, Point p2, ushort* depth_data, PXCProjection *projection) // get the world coordinates of two depth pixels and the distance between them
{
    if (p1.x < 0 || p1.y < 0 || p2.x < 0 || p2.y < 0) return; // skip until both points have been picked with the mouse
    PXCPoint3DF32 depthdata, data3d1, data3d2;
    depthdata.x = (pxcF32)p1.x;
    depthdata.y = (pxcF32)p1.y;
    depthdata.z = (pxcF32)depth_data[p1.y * 640 + p1.x]; // raw depth value at the first pixel (640 = image width)
    projection->ProjectDepthToCamera(1, &depthdata, &data3d1);
    depthdata.x = (pxcF32)p2.x;
    depthdata.y = (pxcF32)p2.y;
    depthdata.z = (pxcF32)depth_data[p2.y * 640 + p2.x]; // raw depth value at the second pixel
    projection->ProjectDepthToCamera(1, &depthdata, &data3d2);
    double distance = sqrt((data3d1.x - data3d2.x) * (data3d1.x - data3d2.x) +
                           (data3d1.y - data3d2.y) * (data3d1.y - data3d2.y) +
                           (data3d1.z - data3d2.z) * (data3d1.z - data3d2.z));
    cout << p1.x << " " << p1.y << endl;
    cout << p2.x << " " << p2.y << endl;
    cout << data3d1.x << " " << data3d1.y << " " << data3d1.z << endl;
    cout << data3d2.x << " " << data3d2.y << " " << data3d2.z << endl;
    cout << distance << endl << endl;
}
void on_mouse(int event, int x, int y, int flags, void *ustc)
{
    if (event == CV_EVENT_LBUTTONDOWN)
    {
        fp = Point(x, y); // first point: where the left button went down
    }
    else if (event == CV_EVENT_LBUTTONUP)
    {
        sp = Point(x, y); // second point: where the left button was released
    }
}
int wmain(int argc, WCHAR* argv[])
{
    cout << "Intel Realsense SDK Hacking using Opencv" << endl;
    cout << "Intel Realsense Camera SDK Frame Capture in opencv Mat Variable -- by Deepak" << endl;
    cout << "Compiled with OpenCV version " << CV_VERSION << endl << endl;
    PXCSenseManager *psm = PXCSenseManager::CreateInstance();
    if (!psm) {
        wprintf_s(L"Unable to create the PXCSenseManager\n");
        return 1;
    }
    psm->EnableStream(PXCCapture::STREAM_TYPE_COLOR, 640, 480); // color resolution
    psm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 640, 480); // depth resolution
    psm->Init();
    PXCCaptureManager *cm = psm->QueryCaptureManager();
    PXCCapture::Device *device = cm->QueryDevice();
    PXCProjection *projection = device->CreateProjection();
    UtilRender color_render(L"Color Stream");
    UtilRender depth_render(L"Depth Stream");
    /////////// OPENCV
    IplImage *image = 0;
    CvSize gab_size;
    gab_size.height = 480;
    gab_size.width = 640;
    image = cvCreateImage(gab_size, 8, 3);
    PXCImage::ImageData data;
    PXCImage::ImageData data_depth;
    unsigned char *rgb_data;
    /////
    for (;;)
    {
        if (psm->AcquireFrame(true) < PXC_STATUS_NO_ERROR) break; // wait for a synchronized frame

        PXCCapture::Sample *sample = psm->QuerySample();
        // retrieve the image or frame by type from the sample
        PXCImage *color_image = sample->color;
        PXCImage *depth_image0 = sample->depth;
        // align the depth image to the color image so pixel coordinates match between the two
        PXCImage *depth_image = projection->CreateDepthImageMappedToColor(depth_image0, color_image);

        color_image->AcquireAccess(PXCImage::ACCESS_READ_WRITE, PXCImage::PIXEL_FORMAT_RGB24, &data);
        depth_image->AcquireAccess(PXCImage::ACCESS_READ_WRITE, &data_depth);

        rgb_data = data.planes[0];
        for (int y = 0; y < 480; y++)
        {
            for (int x = 0; x < 640; x++)
            {
                for (int k = 0; k < 3; k++)
                {
                    image->imageData[y * 640 * 3 + x * 3 + k] = rgb_data[y * 640 * 3 + x * 3 + k];
                }
            }
        }
        ushort* depth_data = (ushort*)data_depth.planes[0]; // 16-bit depth values, one per pixel

        rgb = cv::Mat(image);
        imshow("Color_cv2", rgb);

        ////////////////// translate the picked points from depth coordinates to world coordinates and calculate the distance //////////////////
        setMouseCallback("Color_cv2", on_mouse, 0);
        GetWorldCoordinates(fp, sp, depth_data, projection); // must run before ReleaseAccess invalidates depth_data

        color_image->ReleaseAccess(&data);
        depth_image->ReleaseAccess(&data_depth);

        if (cvWaitKey(10) == 54) break; // quit when the '6' key (ASCII 54) is pressed
        //////
        if (!color_render.RenderFrame(color_image)) break;
        if (!depth_render.RenderFrame(depth_image)) break;

        depth_image->Release(); // CreateDepthImageMappedToColor allocates a new image every frame
        psm->ReleaseFrame();
    }
    cvReleaseImage(&image);
    projection->Release();
    psm->Release();
    return 0;
}
So this brings us back to the very first entry in your message, I believe, where you were using the script above but finding differences between the RealSense and OpenCV Mat versions of what the camera is seeing?
During further research, I found another user who had a RealSense / Mat image difference and was offered some code that they said solved the problem in their particular case. Their solution is in the second comment down, highlighted as the 'correct answer'.
