What's the easiest way to get from a depth image (and a color image too) to real-world metric values (cm/mm)? We're using a BlasterXSenz3D camera.
I just wanted to say first, for those who may want to help, that the BlasterXSenz3D is a new camera based on the SR300 (similar to how the Razer Stargazer is SR300-compatible) and is not the Creative Senz3D from 2013.
In answer to your question, Olajac72, you could use the QueryVertices SDK instruction:
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/queryvertices_pxcprojection.html
Other instructions include CreateColorImageMappedToDepth:
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/createcolorimagemappedtodepth_pxcprojection.html
And CreateDepthImageMappedToColor:
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/createdepthimagemappedtocolor_pxcprojection.html
You should note, though, that CreateColorImageMappedToDepth and CreateDepthImageMappedToColor have a memory-leak issue, meaning that the performance of a program using those instructions degrades over time until the application stops working. So QueryVertices is the safest instruction to use.
Having said that, some developers have managed to negate the memory leak by making sure they call Release at the end of their script.
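To give a rough idea of the flow, here's a minimal untested sketch in C#, assuming the usual PXCMSenseManager pipeline:

PXCMSenseManager sm = PXCMSenseManager.CreateInstance();
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 640, 480);
sm.Init();
if (sm.AcquireFrame(true) >= pxcmStatus.PXCM_STATUS_NO_ERROR)
{
    PXCMCapture.Sample sample = sm.QuerySample();
    var projection = sm.captureManager.QueryDevice().CreateProjection();
    // One vertex per depth pixel, as XYZ world coordinates in mm
    var vertices = new PXCMPoint3DF32[sample.depth.info.width * sample.depth.info.height];
    projection.QueryVertices(sample.depth, vertices);
    projection.Dispose();  // release the projection to avoid leaking it
    sm.ReleaseFrame();     // release the frame before acquiring the next one
}
sm.Dispose();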
Thanks a lot. I'll look into QueryVertices.
Do you also know how to get from a color image to real-world coordinates?
There is also MapColorToDepth.
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?mapcolortodepth_pxcprojection.html
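If I remember the R2 signature correctly, it takes the depth image plus an array of colour-pixel coordinates and fills in the matching depth-image positions. Treat this sketch as untested:

// Map one colour pixel (ci, cj) to its position in the depth image.
var colorPos = new PXCMPointF32[] { new PXCMPointF32 { x = ci, y = cj } };
var depthPos = new PXCMPointF32[1];
projection.MapColorToDepth(depth, colorPos, depthPos);
// depthPos[0] should now hold the corresponding (x, y) in the depth image.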
QueryVertices gets you the world coordinates, but they'll be aligned to the depth image. To map them to the colour image you can use QueryInvUVMap (https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?queryinvuvmap_pxcprojection.html), which will look something like this (untested):
var dwidth = depth.info.width;
var dheight = depth.info.height;

// Inverse UV map: for each colour pixel, the normalised (0-1) coordinates of the
// corresponding depth pixel (negative values mean no mapping exists)
var invuvmap = new PXCMPointF32[color.info.width * color.info.height];
projection.QueryInvUVMap(depth, invuvmap);

// One vertex (XYZ world coordinates in mm) per depth pixel
var vertices = new PXCMPoint3DF32[dwidth * dheight];
projection.QueryVertices(depth, vertices);

// Re-index the vertices so they line up with the colour image
var mappedVertices = new PXCMPoint3DF32[color.info.width * color.info.height];
for (int i = 0; i < invuvmap.Length; i++)
{
    int u = (int)(invuvmap[i].x * dwidth);
    int v = (int)(invuvmap[i].y * dheight);
    if (u >= 0 && v >= 0 && u < dwidth && v < dheight)
    {
        mappedVertices[i] = vertices[u + v * dwidth];
    }
}
Thanks a lot again.
What I mean is: is there a way to calculate the distance between two points in metric values (mm) from their XY coordinates in a 2D color image?
Maybe this is a mathematics question and not related to the camera.
I can think of a mathematical way to do it, but it's not that easy.
Yes: once you have the vertices/world coordinates/camera coordinates (same thing, different names in different places; they are XYZ coordinates given in millimetres from a central point) aligned to the colour image, you can do simple Pythagoras on any two points (which have depth data) to get the distance between them. If you get my code above working, the world coordinates for a point (x, y) in the colour image will be mappedVertices[x + y * color.info.width].
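In C#, using the mappedVertices array from the snippet above (and assuming both points have valid depth data), that's something like:

// Straight-line distance in mm between colour-image points (x1, y1) and (x2, y2)
PXCMPoint3DF32 p1 = mappedVertices[x1 + y1 * color.info.width];
PXCMPoint3DF32 p2 = mappedVertices[x2 + y2 * color.info.width];
double dx = p1.x - p2.x, dy = p1.y - p2.y, dz = p1.z - p2.z;
double distanceMm = Math.Sqrt(dx * dx + dy * dy + dz * dz);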
Correct me on this one: you have mappedVertices[x * y * color.info.width].
Should that be mappedVertices[x + y * color.info.width]? Yes or no?
Wonderful - thanks for letting us know. Please come back to the forum any time if you need further help. Good luck!
Hello, I am a beginner developer using the R200. I cannot find the variable "projection" in the sample source code. It seems that "PXCMCaptureManager.QueryDevice" is needed, but I cannot find the Device either.
Can you show me how to initialize it, please?
Hi,
The syntax will be a bit different depending on which language you're using, but this is what it'll look like in C#:
PXCMSenseManager pSenseManager = PXCMSenseManager.CreateInstance();
pSenseManager.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 0, 0); // enable at least one stream...
pSenseManager.Init(); // ...and initialise the pipeline, or QueryDevice() will return null
var device = pSenseManager.captureManager.QueryDevice();
var projection = device.CreateProjection();
If you can't get the PXCMSenseManager interface etc., make sure you have libpxcclr.cs referenced and a copy of libpxccpp2c.dll in your debug folder.
Good luck!
James
It works. Thank you so much! There are two more questions:
(1) Is it necessary to call "projection.Dispose()" or "device.Dispose()" to avoid the memory-leak issue?
(2) The description of QueryVertices says:
"The QueryVertices function calculates the vertices from the depth image. The vertices contain the world coordinates in mm."
Does that mean "mappedVertices" is equal to the point cloud data? I tried the "plot3" function to display it in MATLAB and got this:
It seems different from the depth image in the samples?
Yes, it's best to dispose of everything when you're done with it.
The vertices will be the point cloud data, yes. You may need to change the scales on your axes and/or clip points which are too close or too far away to get something that looks more realistic! Also, you may prefer to take the negative of the z values so the point cloud isn't 'upside-down'. As you have it there, the camera viewpoint will be looking up from z = 0 rather than looking down from z = 5000.
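For example, something like this (untested) on the vertex array from QueryVertices, before you export it for plotting:

// Negate z so the plotted cloud isn't upside-down: the viewpoint then looks
// down the z axis towards the scene rather than up from z = 0
for (int i = 0; i < vertices.Length; i++)
    vertices[i].z = -vertices[i].z;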
Sorry, I am not sure whether the R200 is different from the SR300? I have tried many times, but the vertices displayed in MATLAB do not look like the depth image. I cannot even recognize anything in the displayed vertices.
I do not know why there are some [0 0 0] points in the vertex data. Can you explain the specific meaning of the vertex points, or how this works, please? Thanks!
If a pixel doesn't have valid depth data, the vertex for that pixel will be (0,0,0), so you can filter all of those out as they don't give you any information. In fact, with the R200, you could probably safely ignore anything with a z value of less than 250, as the camera's minimum range is around 300mm depending on lighting conditions, so anything less than that is likely to be invalid.
Your image does look a little strange, though. What is the camera looking at? Try just pointing it at a flat wall at about 300-500mm to see if your output is a bit cleaner without all those outlying points.
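As a rough validity filter (untested; the 250mm cutoff is just the rule of thumb above):

// Keep only vertices with plausible depth: (0,0,0) means no depth data, and on
// the R200 anything below ~250mm is under the camera's minimum range anyway
var validVertices = new List<PXCMPoint3DF32>();  // needs using System.Collections.Generic;
foreach (var v in vertices)
    if (v.z > 250) validVertices.Add(v);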
Thank you so much!
The previous problem was that there were too many noise points, so I deleted the points whose distance was larger than 2500mm.
Finally I got the point cloud data, and I can see myself in the image now. The blue part is my shape and the other part is the wall.
