Software Archive

Depth stream To 3D position

Jian_l_2
Beginner
502 Views

I want to use every pixel of the depth stream to get its 3D position, including the distances x, y, z and the angles X, Y, Z. Does the SDK include this algorithm?

2 Replies
Pubudu-Silva_Intel

Yes, you can access the depth data of all pixels. Please refer to the background segmentation tutorial, and especially the sample code at

https://software.intel.com/sites/default/files/managed/66/9c/Background_Segmentation.pdf

 

Jonathan_M_Intel
Employee

Hi, I apologize, I think the link Pubudu shared is not helpful. To transform the depth image values to 3D coordinates, check the documentation for the PXCProjection interface:

https://software.intel.com/sites/landingpage/realsense/camera-sdk/2014gold/documentation/html/index.html?pxcprojection.html

 

Try this (simplified; error handling omitted):

PXCProjection *projection = sensemanager->QueryCaptureManager()->QueryDevice()->CreateProjection();

// One PXCPoint3DF32 (x, y, z) per depth pixel
PXCPoint3DF32 *depthCoords = new PXCPoint3DF32[width * height];

while (sensemanager->AcquireFrame(true) >= PXC_STATUS_NO_ERROR) {
    PXCCapture::Sample *sample = sensemanager->QuerySample();
    PXCImage *depthMap = sample->depth;
    projection->QueryVertices(depthMap, depthCoords);
    // Do something with your coordinates
    sensemanager->ReleaseFrame();
}

projection->Release();
delete[] depthCoords;

Your 3D points will be in the depthCoords array, packed as [x,y,z] in millimeters, with one set of 3 floats for each of the original depth map pixels, in "raster order", and using the coordinate system described here:

https://software.intel.com/sites/landingpage/realsense/camera-sdk/2014gold/documentation/html/index.html?manuals_coordinate_systems.html

Note:  many of the resulting coordinates will be zeros, but that only means that those pixels were either saturated or too far for detection.  There will be usable depth points deeper into the array.
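A simple pass over the array can skip those zero entries and convert millimeters to meters. A minimal sketch: `Point3D` and `validPointsInMeters` are illustrative names, not SDK API; the struct just mirrors the x/y/z layout of PXCPoint3DF32 so the example compiles without the SDK installed.

```cpp
#include <cstddef>
#include <vector>

// Stand-in for PXCPoint3DF32 (same x/y/z float layout), so this
// sketch is self-contained. With the SDK, use PXCPoint3DF32 directly.
struct Point3D { float x, y, z; };

// Collect only the valid vertices (non-zero entries), converting
// millimeters to meters. In the loop above, `vertices` would be the
// depthCoords array and `count` would be width * height.
std::vector<Point3D> validPointsInMeters(const Point3D *vertices, std::size_t count) {
    std::vector<Point3D> result;
    for (std::size_t i = 0; i < count; ++i) {
        const Point3D &p = vertices[i];
        if (p.x == 0.0f && p.y == 0.0f && p.z == 0.0f)
            continue; // saturated or out-of-range pixel
        result.push_back({p.x / 1000.0f, p.y / 1000.0f, p.z / 1000.0f});
    }
    return result;
}
```

Note that the result loses the raster ordering; if you need to map a point back to its pixel, keep the index `i` alongside each point instead.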

 

Hope this helps.

 
