
I want to get the real x, y, and z coordinates of the object.

승노
Beginner

Currently, I am using the R200. My current project is to find an object and get its actual coordinates. However, I am a beginner and it is difficult to accomplish the task by myself.

How can I find the object I want with the R200? And how do I get the real-world coordinates of that object?

So far, I have obtained the pixel x, y and depth coordinates of the point I clicked with the mouse in the color image.

21 Replies
MartyG
Honored Contributor III

In the 2016 R2 SDK, you can project 3D world coordinates to depth coordinates with the function ProjectCameraToDepth.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html... Intel® RealSense™ SDK 2016 R2 Documentation

The reverse of this process, projecting depth coordinates to 3D world coordinates, is ProjectDepthToCamera.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html... Intel® RealSense™ SDK 2016 R2 Documentation

승노
Beginner

Do you have any examples of how to use them? I tried, but I don't know how to approach either function. I use C#.

MartyG
Honored Contributor III

This page has a couple of C# examples for these functions.

https://csharp.hotexamples.com/examples/intel.rssdk/PXCMPoint3DF32/-/php-pxcmpoint3df32-class-exampl... PXCMPoint3DF32, intel.rssdk C# (CSharp) Code Examples - HotExamples

ProjectCameraToDepth

/**
   @brief Map camera coordinates to depth coordinates for a few pixels.
   @param[in] pos3d The array of world coordinates, in mm.
   @param[out] pos_uv The array of depth coordinates, to be returned.
   @return PXCM_STATUS_NO_ERROR Successful execution.
*/
public pxcmStatus ProjectCameraToDepth(PXCMPoint3DF32[] pos3d, PXCMPointF32[] pos_uv)
{
    return PXCMProjection_ProjectCameraToDepth(instance, pos3d.Length, pos3d, pos_uv);
}

ProjectDepthToCamera

/**
   @brief Map depth coordinates to world coordinates for a few pixels.
   @param[in] pos_uvz The array of depth coordinates + depth value in the PXCMPoint3DF32 structure.
   @param[out] pos3d The array of world coordinates, in mm, to be returned.
   @return PXCM_STATUS_NO_ERROR Successful execution.
*/
public pxcmStatus ProjectDepthToCamera(PXCMPoint3DF32[] pos_uvz, PXCMPoint3DF32[] pos3d)
{
    return PXCMProjection_ProjectDepthToCamera(instance, pos_uvz.Length, pos_uvz, pos3d);
}

승노
Beginner

I'm sorry. In the ProjectDepthToCamera example, the parameters (pos_uvz, pos3d) need to be used, and I have no idea how. How should I declare the parameters and fill them with values?

MartyG
Honored Contributor III

No need to be sorry. I will have to refer this question to RealSense stream programming expert jb455 though, as he can provide a better answer on this subject than I can. I apologize for the wait in the meantime.

jb455
Valued Contributor II

As you want to map the colour image to camera (world) coordinates, you'll want to use https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html... ProjectColorToCamera instead.

You'll first need to map the depth values to the colour image, using something like this:
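A minimal sketch of one way to do that mapping with the 2016 R2 SDK, assuming you already have a PXCMProjection object (called `projection` here), the current depth and colour PXCMImages, and the colour dimensions cwidth and cheight; the variable names and the use of CreateDepthImageMappedToColor are illustrative rather than the only approach:

```csharp
// Sketch: build mappedPixels, one depth value (in mm) per colour pixel.
// Assumes: projection (PXCMProjection), depthImage/colorImage (PXCMImage),
// and cwidth/cheight (colour image dimensions) already exist.
float[] mappedPixels = new float[cwidth * cheight];

// Re-project the depth image into the colour image's pixel grid
PXCMImage mappedDepth = projection.CreateDepthImageMappedToColor(depthImage, colorImage);

PXCMImage.ImageData data;
pxcmStatus sts = mappedDepth.AcquireAccess(PXCMImage.Access.ACCESS_READ,
    PXCMImage.PixelFormat.PIXEL_FORMAT_DEPTH, out data);
if (sts >= pxcmStatus.PXCM_STATUS_NO_ERROR)
{
    // Depth pixels are 16-bit values in millimetres; 0 marks invalid depth
    ushort[] depthPixels = data.ToUShortArray(0, cwidth * cheight);
    for (int k = 0; k < cwidth * cheight; k++)
        mappedPixels[k] = depthPixels[k];
    mappedDepth.ReleaseAccess(data);
}
mappedDepth.Dispose();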

Then you can use ProjectColorToCamera like so:

PXCMPoint3DF32[] cp = new PXCMPoint3DF32[cwidth * cheight];
PXCMPoint3DF32[] cvp = new PXCMPoint3DF32[cwidth * cheight];

for (int j = 0, k = 0; j < cheight; j++)
{
    for (int i = 0; i < cwidth; i++, k++)
    {
        // Create 3D point with the colour (x,y) coordinates and corresponding z value
        cp[k] = new PXCMPoint3DF32(i, j, mappedPixels[j * cwidth + i]);
    }
}

projection.ProjectColorToCamera(cp, cvp);

Now cvp is filled with xyz coordinates for each point in the colour image. To get the coordinates for a specific colour-image point (i, j), use cvp[j * cwidth + i].
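For example (a hedged sketch; clickX and clickY are hypothetical names for the pixel you got from the mouse event on the colour image):

```csharp
// Hypothetical: (clickX, clickY) is the colour pixel the user clicked
PXCMPoint3DF32 world = cvp[clickY * cwidth + clickX];
if (world.z > 0)  // skip invalid depth points
{
    Console.WriteLine("x = {0} mm, y = {1} mm, z = {2} mm", world.x, world.y, world.z);
}
```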

승노
Beginner

Thank you for your response. My project currently uses OpenCV. However, I still have some questions. First, my program is getting invalid depth values (i.e., garbage values) from some pixels. How can I resolve this? Second, how do I view the depth-value image (i.e., cp[k].z) after mapping the depth values to the colour image?

jb455
Valued Contributor II

Invalid depth points will have z values of either 0 or -1 (I can't remember which), so just add `if(p.z>0)` in front of everything that needs a valid depth value.

The z values of the camera coordinates are the same as the depth image z values so it'll be cvp[k].z in my example. But if you really want to keep the depth data array around for access there's nothing stopping you!

승노
Beginner

Are the cvp[] values extracted here in the real-world XYZ coordinate system? Also, I want to turn the camera 45 degrees; how should I convert the coordinate system?

승노
Beginner

Thank you for your response.

But there is another problem. I have to track and find objects, so I want to use the EnableTracker() function. However, I understand that EnableTracker() is not available on the R200. Is there another way to track objects?

MartyG
Honored Contributor III

I researched your question carefully but was not able to find a way to track objects with the R200 unfortunately, either with the Windows SDK or with Librealsense.

idata
Community Manager

Hello NSK,

For the R200, object tracking is not supported, and therefore Intel does not have any sample code on how to achieve this feature. Since the R200 is an end-of-life product, Intel will not develop further for this camera.

Best Regards,

Juan N.

승노
Beginner

I have now purchased the SR300. Can I get some examples of object tracking with the SR300? And how does object tracking work?

MartyG
Honored Contributor III

I would recommend using the current RealSense SDK 2.0 for your project, as it is compatible with the SR300, works fully with OpenCV and can do object detection via OpenCV.

You can download RealSense SDK 2.0 from the link below:

https://github.com/IntelRealSense/librealsense GitHub - IntelRealSense/librealsense: Intel® RealSense™ SDK

And a sample program for OpenCV object detection with SDK 2.0 can be found at this link:

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/dnn librealsense/wrappers/opencv/dnn at master · IntelRealSense/librealsense · GitHub

승노
Beginner

First of all, thank you very much for your help. By the way, I am using C#. Do you have C# code similar to this?

MartyG
Honored Contributor III

There is not currently a C# object detection tutorial for RealSense SDK 2.0. The only other one available, aside from the OpenCV example, is in Python.

https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/distance_to_object.ipynb librealsense/distance_to_object.ipynb at jupyter · IntelRealSense/librealsense · GitHub

If you are willing to go back to the older '2016 R2' RealSense SDK, which is compatible with your R200 and SR300 cameras, there are sample programs in that SDK in the C# language with source code for object tracking and object recognition.

Object recognition (R200)

Object tracking (SR300)

If you do not already have RealSense SDK 2016 R2, it can be downloaded (a 1.8 GB file) in your browser with the link below.

http://registrationcenter-download.intel.com/akdlm/irc_nas/vcp/9078/intel_rs_sdk_offline_package_10....

jb455
Valued Contributor II

You could try using a third-party library like OpenCV. There's an example/tutorial for object tracking here: https://www.learnopencv.com/object-tracking-using-opencv-cpp-python/ Object Tracking using OpenCV (C++/Python) | Learn OpenCV, and you can use https://github.com/shimat/opencvsharp OpenCVSharp to get it working in dot net.
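A hedged sketch of what that can look like in C# with OpenCvSharp (this assumes the contrib tracking module is installed, and exact class and method names vary between OpenCvSharp versions, so treat it as an outline rather than a drop-in implementation):

```csharp
using OpenCvSharp;
using OpenCvSharp.Tracking;

// Outline: initialise a KCF tracker on a bounding box, then update it per frame
using (var capture = new VideoCapture(0))
using (var tracker = TrackerKCF.Create())
using (var frame = new Mat())
{
    capture.Read(frame);
    var bbox = new Rect2d(100, 100, 80, 80);  // hypothetical initial box around the object
    tracker.Init(frame, bbox);

    while (capture.Read(frame) && !frame.Empty())
    {
        if (tracker.Update(frame, ref bbox))
        {
            Cv2.Rectangle(frame,
                new Rect((int)bbox.X, (int)bbox.Y, (int)bbox.Width, (int)bbox.Height),
                Scalar.Red, 2);
        }
        Cv2.ImShow("tracking", frame);
        if (Cv2.WaitKey(1) == 27) break;  // Esc to quit
    }
}
```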

승노
Beginner

In the "Object Tracking" example, a .slam or .xml file needs to be read for a 3D object, but how do I create this file?

MartyG
Honored Contributor III

The .slam and .xml files can be created with a program called the Metaio Toolbox that is supplied with the '2016 R2' RealSense SDK.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html... Intel® RealSense™ SDK 2016 R2 Documentation

The default installation location of the Metaio Toolbox software is:

C: > Program Files (x86) > Intel > RSSDK > contrib > Metaio > MetaioTrackerToolbox

Sahira_Intel
Moderator

Hi NSK,

We hope you were able to get your questions answered. If you have any further questions please let us know.

Regards,

Intel Customer Support
