I've been using the Intel RealSense SDK bindings in Unity for a little while now, and I'm a bit at a loss when trying to translate or project the 2D coordinates of a pixel I selected on the IR image onto the depth map, and then into 3D world coordinates.
From what I understand from the SDK documentation, the IR map is obtained by the same camera that generates the depth map, which means I should be able to use the ProjectDepthToCamera method from the Projection class to obtain real-world coordinates for my selected point. However, I do not have any depth information for my point, and the method signature clearly states that a 3D point from the depth map is expected as input in order to obtain the 3D point in world coordinates.
Is there any obvious or non-obvious way that I may have missed to project a 2D point from the IR image onto the depth map, and then use the previously mentioned ProjectDepthToCamera method to obtain the real-world coordinates of my point?
Thanks in advance for any help!
Can you enable the depth stream too to get the depth values for the IR points? I've only used the ProjectColorToCamera method, and before using that I have to create an array with colour (x,y) values and the z value obtained from CreateDepthImageMappedToColor. Perhaps it's similar with IR?
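Since the IR and depth images come from the same imager, the IR pixel (x, y) should line up with the same (x, y) in the depth image, so you could read the depth value there and feed (x, y, d) straight into ProjectDepthToCamera. Something like this sketch is what I have in mind — untested, and it assumes the legacy C# bindings (PXCMProjection, PXCMCapture.Sample) with IR and depth at the same resolution:

```csharp
// Untested sketch -- assumes the legacy RealSense C# bindings and that the
// depth stream is enabled alongside IR, at the same resolution.
PXCMPoint3DF32[] IrPixelToCamera(PXCMCapture.Sample sample,
                                 PXCMProjection projection,
                                 int x, int y)
{
    PXCMImage depth = sample.depth;
    PXCMImage.ImageData data;
    pxcmStatus sts = depth.AcquireAccess(PXCMImage.Access.ACCESS_READ,
                                         PXCMImage.PixelFormat.PIXEL_FORMAT_DEPTH,
                                         out data);
    if (sts < pxcmStatus.PXCM_STATUS_NO_ERROR) return null;

    int w = depth.info.width;
    int h = depth.info.height;
    short[] pixels = data.ToShortArray(0, w * h);  // depth values in millimetres
    depth.ReleaseAccess(data);

    // IR and depth share the same imager, so the IR pixel (x, y) maps
    // directly onto depth pixel (x, y).
    float d = pixels[y * w + x];
    if (d == 0) return null;  // 0 means no depth reading at this pixel

    var uvz = new PXCMPoint3DF32[] { new PXCMPoint3DF32 { x = x, y = y, z = d } };
    var world = new PXCMPoint3DF32[1];
    projection.ProjectDepthToCamera(uvz, world);  // world[0] is in camera space (mm)
    return world;
}
```

You'd probably want to check the returned pxcmStatus of ProjectDepthToCamera too, and average a small neighbourhood around (x, y) if single-pixel depth is noisy.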