Hello, I'm using an SR300 via the official SDK from LabVIEW.
I'm able to correctly acquire the depth map.
Now I'm trying to figure out how to convert the depth map into a real-world-coordinate point cloud. I need the best possible result, so I want to take calibration, lens distortion, etc. into account.
Can you suggest some tips on how to proceed?
Alessandro.
The QueryVertices instruction will map depth to real world coordinates.
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/queryvertices_pxcprojection.html QueryVertices
If you also want to map depth to color, this post is helpful.
The BlasterX camera referenced in the discussion's title is a rebranded SR300, so the code listed should be fine for your SR300.
Hi MartyG, thanks for the fast reply.
I tried QueryVertices, but it is very slow for converting the entire depth map. From the QueryVertices documentation I understand that the function should only be used for converting a "few points".
Also, I don't need the color stream in my project.
Alessandro
There is also ProjectDepthToCamera:
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/projectdepthtocamera_pxcprojection.html ProjectDepthToCamera
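For a handful of samples, ProjectDepthToCamera amounts to the same pinhole math applied per point. A hedged, self-contained sketch using plain structs and assumed intrinsics rather than the SDK's PXCPoint3DF32 types:

```cpp
#include <vector>

struct UVZ { float u, v, z; };   // pixel coordinates + depth in mm
struct XYZ { float x, y, z; };   // camera-space coordinates in mm

// Map a list of (u, v, depth) samples to camera space, the way a
// per-point projection call does for a small set of points.
// fx, fy, cx, cy are assumed pinhole intrinsics in pixels.
std::vector<XYZ> projectDepthToCamera(const std::vector<UVZ>& in,
                                      float fx, float fy,
                                      float cx, float cy) {
    std::vector<XYZ> out;
    out.reserve(in.size());
    for (const UVZ& s : in) {
        out.push_back({(s.u - cx) * s.z / fx,   // X
                       (s.v - cy) * s.z / fy,   // Y
                       s.z});                   // Z = depth
    }
    return out;
}
```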
I apologize that I cannot provide in-depth help on this question. jb455 is our resident expert on stream programming; I just play the warm-up guy on such questions until he's online.
Thanks again MartyG,
I saw the https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/projectdepthtocamera_pxcprojection.html ProjectDepthToCamera page, but I can't tell whether it uses the camera-specific calibration/lens compensation or not.
Alessandro
The SR300 has a fixed factory-set calibration which we can't change (voice your displeasure at that fact here: ). So it does use some calibration, but that calibration might not necessarily be correct! That's just the colour-depth image calibration, though, I suppose. I haven't noticed any problems with the depth calibration (excluding the colour mapping), but I don't need it to be uber-precise (to 1 mm is usually enough for me). I'd suggest trying it yourself; if you find the built-in calibration isn't good enough for you, you may have to calibrate the camera yourself and then use that calibration to calculate your own real-world coordinates with the pinhole model.
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?pxccalibration.html This page in the docs shows you how to obtain the calibration that is used internally if you want to inspect it.
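If you do end up calibrating yourself, the usual pipeline is to account for lens distortion before the pinhole back-projection. A sketch assuming Brown-Conrady coefficients (k1, k2, k3, p1, p2) from your own calibration — check your calibration tool's convention, since some models apply the coefficients in the opposite direction:

```cpp
struct Intrinsics {
    float fx, fy, cx, cy;       // pinhole parameters (pixels)
    float k1, k2, k3, p1, p2;   // Brown-Conrady distortion coefficients
};

struct Point3f { float x, y, z; };

// Apply the Brown-Conrady radial/tangential model to one pixel's
// normalized coordinates, then back-project with the pinhole model.
Point3f deprojectPixel(const Intrinsics& in, float u, float v, float depth) {
    const float x = (u - in.cx) / in.fx;   // normalized image coordinates
    const float y = (v - in.cy) / in.fy;
    const float r2 = x * x + y * y;
    const float f = 1.0f + in.k1 * r2 + in.k2 * r2 * r2 + in.k3 * r2 * r2 * r2;
    const float ux = x * f + 2.0f * in.p1 * x * y + in.p2 * (r2 + 2.0f * x * x);
    const float uy = y * f + 2.0f * in.p2 * x * y + in.p1 * (r2 + 2.0f * y * y);
    return {depth * ux, depth * uy, depth};
}
```

With all coefficients set to zero this reduces to the plain pinhole model, so you can compare results with and without your calibrated distortion terms.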