I just wanted to say first, for those who may want to help, that the BlasterX Senz3D is a new camera based on the SR300 (similar to how the Razer Stargazer is an SR300-based camera) and is not the Creative Senz3D from 2013.
In answer to your question, Olajac72, you could use the QueryVertices SDK instruction:
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/queryvertices_pxcprojection.html
Other instructions include CreateColorImageMappedToDepth:
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/createcolorimagemappedtodepth_pxcprojection.html
And CreateDepthImageMappedToColor:
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/createdepthimagemappedtocolor_pxcprojection.html
You should note though that CreateColorImageMappedToDepth and CreateDepthImageMappedToColor have a memory leak issue, meaning that the performance of a program using those instructions degrades over time until the application stops working. So QueryVertices is the safest instruction to use.
Having said that, some developers have managed to avoid the memory leak by making sure they release the returned image once they are finished with it.
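In C#, that pattern would look something like this (a minimal sketch only: 'depthImage' and 'colorImage' are placeholder names, and it assumes 'projection' has already been obtained from an initialised pipeline as in the code further down the thread):

// Create the mapped image, read from it, then release it in the same frame.
PXCMImage mapped = projection.CreateDepthImageMappedToColor(depthImage, colorImage);
if (mapped != null)
{
    // ... use the mapped image here ...
    mapped.Dispose(); // releasing it each frame keeps the leak from accumulating
}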
There is also MapColorToDepth.
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?mapcolortodepth_pxcprojection.html Intel® RealSense™ SDK 2016 R2 Documentation
QueryVertices gets you the world coordinates, but they'll be aligned to the depth image. To map them to the colour image you can use QueryInvUVMap (https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?queryinvuvmap_pxcprojection.html), which will look something like this (untested):
var dwidth = depth.info.width;
var dheight = depth.info.height;
// For each colour pixel, the inverse UV map gives the corresponding
// (normalised) coordinates in the depth image.
var invuvmap = new PXCMPointF32[color.info.width * color.info.height];
projection.QueryInvUVMap(depth, invuvmap);
// World coordinates in mm, one vertex per depth pixel.
var vertices = new PXCMPoint3DF32[dwidth * dheight];
projection.QueryVertices(depth, vertices);
// Re-index the vertices by colour pixel instead of depth pixel.
var mappedVertices = new PXCMPoint3DF32[color.info.width * color.info.height];
for (int i = 0; i < invuvmap.Length; i++)
{
    int u = (int)(invuvmap[i].x * dwidth);
    int v = (int)(invuvmap[i].y * dheight);
    if (u >= 0 && v >= 0 && u + v * dwidth < vertices.Length)
    {
        mappedVertices[i] = vertices[u + v * dwidth];
    }
}
Thanks a lot again
What I mean is: is there a way to calculate the distance between two points in metric units (mm) from their XY coordinates in a 2D color image?
Maybe this is a mathematics question and not related to the camera.
I can think of a mathematical way to do it, but it's not that easy.
Yes: once you have the vertices/world coordinates/camera coordinates (same thing, different names in different places - they are XYZ coordinates given in millimetres from a central point) aligned to the colour image, you can do simple Pythagoras on any two points (which have depth data) to get the distance between them. If you get my code above working, the world coordinates for a point (x,y) in the colour image will be mappedVertices[x + y * color.info.width].
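In C#, using the mappedVertices array from the snippet above, that calculation might look like this (a sketch: x1/y1 and x2/y2 are placeholder pixel coordinates, and both points need valid depth data, i.e. a non-zero vertex):

// World coordinates (mm) for two colour-image points.
var a = mappedVertices[x1 + y1 * color.info.width];
var b = mappedVertices[x2 + y2 * color.info.width];
// Straightforward 3D Pythagoras gives the metric distance.
double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
double distanceMm = Math.Sqrt(dx * dx + dy * dy + dz * dz);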
Hi,
The syntax will be a bit different depending on which language you're using, but this is what it'll look like in C#:
// Create the pipeline, then get a projection instance from the device.
PXCMSenseManager pSenseManager = PXCMSenseManager.CreateInstance();
var device = pSenseManager.captureManager.QueryDevice();
var projection = device.CreateProjection();
If you can't get the PXCMSenseManager interface etc., make sure you have libpxcclr.cs referenced and a copy of libpxccpp2c.dll in your debug folder.
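Also note that QueryDevice will usually return null until the streams have been enabled and Init has been called, so a slightly fuller sketch would be (the 640x480 @ 30fps values are just an example):

PXCMSenseManager pSenseManager = PXCMSenseManager.CreateInstance();
// Enable the streams you need before initialising the pipeline.
pSenseManager.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_COLOR, 640, 480, 30);
pSenseManager.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 640, 480, 30);
pSenseManager.Init();
var device = pSenseManager.captureManager.QueryDevice();
var projection = device.CreateProjection();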
Good luck!
James
It works. Thank you so much! I have two more questions:
(1) Is it necessary to add "projection.Dispose()" or "device.Dispose()" to avoid the memory leak issue?
(2) The description of QueryVertices says:
"The QueryVertices function calculates the vertices from the depth image. The vertices contain the world coordinates in mm."
Does this mean that "mappedVertices" is equal to the point cloud data? I tried the plot3 function to display it in MATLAB and got this:
It seems to be different from the depth image in the samples?
Yes, best to dispose everything when you're done with it.
The vertices will be the point cloud data, yes. You may need to change the scales on your axes and/or clip points which are too close or too far away to get something which looks more realistic! Also you may prefer to take the negative of the z values so the point cloud isn't 'upside-down'. As you have it there, the camera viewpoint will be looking up from z=0 rather than looking down from z=5000.
Sorry, I am not sure whether the R200 is different from the SR300? I have tried many times, but the vertices displayed in MATLAB do not look like the depth image. I cannot even recognize anything from the displayed vertices.
I do not know why there are some [0 0 0] points in the vertex data. Can you explain the specific meaning of the vertex points, or how this works, please? Thanks!
If a pixel doesn't have valid depth data, the vertex for that pixel will be (0,0,0) - so you can filter out all of those as they don't give you any information. In fact, with the R200, you could probably safely ignore anything with a z value of less than 250mm, as the minimum range for the camera is around 300mm depending on lighting conditions, so anything less than that will likely be invalid.
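A quick way to drop those points in C# would be something like this (a sketch, assuming the mappedVertices array from earlier in the thread and a 'using System.Linq;' directive; the 250mm cut-off follows the reasoning above):

// Discard (0,0,0) vertices and anything inside the camera's minimum range.
var validVertices = mappedVertices.Where(p => p.z > 250).ToList();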
Your image does look a little strange though. What is the camera looking at? Try just pointing it at a flat wall at about 300-500mm to see if your output is a bit cleaner without all those outlying points.
