Hi,
I am trying to get 3D (x, y, z) coordinates from the depth image using an Intel RealSense camera. I couldn't find any clear, straightforward solution. There seems to be a function ProjectDepthToCamera() in the PXCMProjection class, but I couldn't get it to work. Can someone please help me with this?
Thanks,
Push
Hi Push,
To get the (x,y,z) coordinates of each point in the image, I've modified the DepthToColorCoordinatesByFunction() function in projection.cs in DF_RawStreams.cs_vs2010-13.sln. The array dcords contains all the points; to get the depth for a specific (x,y) coordinate you'll need to use dcords[y * [width of image in pixels] + x].
The problem with this, though, is that x and y are the pixel coordinates of the point in the image, while z is the depth from the camera in mm. Getting them all in mm is the next problem if you want 'real world' coordinates.
The way I am *attempting* to do this at the moment is to use the device.QueryDepthFieldOfView() method to calculate the real-world size of the frame at depth z (a simple bit of trigonometry), then multiply that by the x/y coordinates divided by the width and height of the image (in pixels). In theory this should give you the coordinates of the point in mm, but in my testing it hasn't been working too well, either due to the accuracy of the camera or some problem with my calculations. Let me know if you have any more luck!
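For reference, this is roughly what I mean (just a sketch: the pixel and depth values are placeholders, I'm assuming QueryDepthFieldOfView() reports the FOV in degrees, and I also shift the origin to the image centre so the result is relative to the optical axis):

```csharp
// Sketch of the field-of-view approach; "device" is your PXCMCapture.Device.
PXCMPointF32 fov = device.QueryDepthFieldOfView();  // assumed: degrees

int imageWidth = 640, imageHeight = 480;  // depth image size (placeholder)
int px = 320, py = 240;                   // pixel of interest (placeholder)
double depthZ = 1000.0;                   // depth at (px, py) in mm

// Real-world frame size at depth z: 2 * z * tan(FOV / 2).
double frameWidthMm  = 2.0 * depthZ * Math.Tan(fov.x * Math.PI / 360.0);
double frameHeightMm = 2.0 * depthZ * Math.Tan(fov.y * Math.PI / 360.0);

// Origin at the image centre, then scale pixels to mm.
double xMm = ((double)px / imageWidth - 0.5) * frameWidthMm;
double yMm = ((double)py / imageHeight - 0.5) * frameHeightMm;
```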
James
Thank you James! But I am not quite sure that applying a simple trigonometric ratio gives the correct x and y coordinates, because converting pixel coordinates into real-world coordinates depends on the camera matrix. Maybe that is the reason your coordinates are inaccurate.
Hello,
When you get the depth image, each pixel value represents distance in a non-standard unit, which is the disparity of that pixel. This is because of the way the camera works internally. You don't need to understand that, but if you want, you can search for "computer stereo vision". The RealSense camera is a particular case of that, commonly referred to as "active computer stereo vision", because it uses structured light (the projected infrared pattern) to make the process easier and to allow it to work at night!
Anyway, if you have a depth image, you can get the corresponding 3D points in mm by using the QueryVertices() function of the RealSense API.
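Something along these lines should do it (a sketch only: the CreateProjection() setup, the QueryInfo() size lookup and the status check are my assumptions about the surrounding code):

```csharp
// Sketch only: assumes "device" is your PXCMCapture.Device and
// "sample.depth" is the current depth frame.
PXCMProjection projection = device.CreateProjection();

PXCMImage depth = sample.depth;
PXCMImage.ImageInfo info = depth.QueryInfo();

// One vertex per depth pixel; allocate before calling QueryVertices.
PXCMPoint3DF32[] vertices = new PXCMPoint3DF32[info.width * info.height];
pxcmStatus sts = projection.QueryVertices(depth, vertices);

if (sts >= pxcmStatus.PXCM_STATUS_NO_ERROR)
{
    int px = 320, py = 240;  // depth pixel of interest (placeholder)
    PXCMPoint3DF32 v = vertices[py * info.width + px];
    // v.x, v.y, v.z are the coordinates in mm from the camera origin.
}

projection.Dispose();
```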
Make sure you allocate the memory for the vertices first though!
If you have points in the depth image already, you can use projection.QueryVertices to get the array of vertices (mapped to the depth image); then vertices[y * (depth image width) + x] will give you the real-world coordinates of your (x,y) depth point.
Also, in case anyone from the future ends up here, ignore my previous post in this thread. With the benefit of hindsight, that was a pretty dumb way to go about things! Check out the member functions of PXCMProjection. Though the samples and docs don't do a very good job of showing how to use them, they are good for doing this sort of thing.
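For example, ProjectDepthToCamera(), which Push asked about at the start, takes depth pixel coordinates plus the depth value in mm and gives back camera-space coordinates. A rough sketch (the projection setup and the sample values are my assumptions):

```csharp
// Sketch: project one depth pixel (plus its depth in mm) to camera space.
PXCMProjection projection = device.CreateProjection();

PXCMPoint3DF32[] posUvz = new PXCMPoint3DF32[1];
posUvz[0].x = 320;   // depth pixel column (placeholder)
posUvz[0].y = 240;   // depth pixel row (placeholder)
posUvz[0].z = 1000;  // depth value at that pixel, in mm (placeholder)

PXCMPoint3DF32[] pos3d = new PXCMPoint3DF32[1];
pxcmStatus sts = projection.ProjectDepthToCamera(posUvz, pos3d);
// On success, pos3d[0] holds (x, y, z) in mm from the camera origin.

projection.Dispose();
```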
Hi James, thank you for your reply. I tried the approach you highlighted, but for some reason the width and height of my depth image is 480 x 360 px while my color image is 640 x 480, so I am unable to map the vertices from the coordinates properly. I am new to RealSense and I don't know why there is an inconsistency between sample.depth and sample.color. Am I missing something?
Because the depth and colour cameras are two separate lenses pointing at slightly different places, even if the depth and colour images were the same resolution, the same (x,y) pixel coordinate would look at a different physical place in each image. Therefore, if you have a colour point and want to know the corresponding depth point (if it exists: not all colour points are picked up in the depth image and vice versa), you need to map between the two images.
There are a couple of ways of doing this. If you're only interested in "a few" points (i.e., significantly fewer than the whole image), PXCMProjection.MapColorToDepth is probably the easiest to get started with. You just pass it the depth image (sample.depth), an array of the colour points you're interested in (it can be a single point) and an empty array of the same length to be filled with the corresponding depth points. You can then loop through the resulting depth (i,j) points to look up the depth in mm as per my previous post. Note that if a colour point has no corresponding depth point it'll come back as (-1,-1), so check that the coordinates are not negative before attempting to get the vertex or you'll get an out-of-bounds error.
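Roughly like this (a sketch, not tested: I'm assuming the MapColorToDepth(depth, colourPoints, depthPoints) argument order, and vertices/depthWidth come from a QueryVertices call as in my earlier post):

```csharp
// Sketch: map one colour pixel to its depth pixel, then look up the vertex.
PXCMPointF32[] colourPts = new PXCMPointF32[1];
colourPts[0].x = 400;  // colour pixel of interest (placeholder)
colourPts[0].y = 300;

PXCMPointF32[] depthPts = new PXCMPointF32[1];
pxcmStatus sts = projection.MapColorToDepth(sample.depth, colourPts, depthPts);

// (-1,-1) means there is no depth data for that colour pixel.
if (sts >= pxcmStatus.PXCM_STATUS_NO_ERROR &&
    depthPts[0].x >= 0 && depthPts[0].y >= 0)
{
    int dx = (int)depthPts[0].x, dy = (int)depthPts[0].y;
    int depthWidth = 480;  // width of the depth image (placeholder)
    PXCMPoint3DF32 vertex = vertices[dy * depthWidth + dx];  // mm from origin
}
```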
Yeah, this would work well if I were only looking for the depth, but like I mentioned in my first post, I am getting the depth of the blob points; what I need are the x and y in mm so I can calculate the distance in mm between any two points. Currently my blob data structure returns x and y in pixel coordinates and z as the distance in mm from the camera. I need to convert the (x,y) to x in mm from the camera's origin and y in mm from the camera's origin, so once I have them I can use the distance formula to measure the distance between two points in mm.
Sorry, yeah: the last bit in my post #8 will give you the vertex, which is the (x,y,z) coordinates in mm from the camera's origin, not just the depth. The vertices array is mapped to the depth image, which is why you have to map from colour to depth first.
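To make that last step concrete, a quick sketch (the pixel coordinates are placeholders, and vertices/depthWidth come from the QueryVertices snippet above):

```csharp
// Straight-line distance in mm between two depth pixels.
int x1 = 100, y1 = 120, x2 = 300, y2 = 200;  // placeholders

PXCMPoint3DF32 a = vertices[y1 * depthWidth + x1];
PXCMPoint3DF32 b = vertices[y2 * depthWidth + x2];

double distMm = Math.Sqrt((a.x - b.x) * (a.x - b.x) +
                          (a.y - b.y) * (a.y - b.y) +
                          (a.z - b.z) * (a.z - b.z));
```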
Oh, you are saying that the color image is not directly mapped to the vertices array; instead we need to MapColorToDepth first and then use the resulting values to index into the vertices? OK, let me try this tonight.
Yes, that's it. So the 15th item in the vertex array corresponds to the 15th pixel in the depth image, but it'll be some other pixel in the colour image, which we find by mapping.
