The IQSampleTool application calculates depth maps, and the depth (Z coordinate) under the mouse cursor is displayed on the image.
This cursor location is given in image pixels as e.X, e.Y from the mouse event. Is there a way to convert these e.X, e.Y image pixels into X, Y coordinates in the camera coordinate system?
Yes. You can use QueryVertices if you're pointing at the depth image, or ProjectColorToCamera if it's the colour image, to map from the depth/colour coordinate systems to the camera coordinate system. Once you have your array of mapped coordinates, you need to convert (e.X, e.Y) back to the pixel in the original image, in case there is scaling on the UI. For example, if you have a 640x480 image and it's displayed at 640x480 in the UI you can skip this step, but if it's 640x480 and represented on screen by a 320x240 thumbnail, you'll need to multiply each mouse coordinate by the scaling factor. Then it's just a case of getting the corresponding element from the mapped array: array[x + y * imageWidth].
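To make the two steps above concrete, here is a minimal sketch in Python (illustrative only; the actual application would be doing this in its own UI language). The `mapped` array below is fabricated toy data standing in for the output of QueryVertices or ProjectColorToCamera, and the function name `ui_to_camera` is my own, not part of any SDK:

```python
def ui_to_camera(e_x, e_y, ui_w, ui_h, img_w, img_h, mapped):
    """Convert a UI mouse position to a camera-space (X, Y, Z) entry.

    mapped is a flat, row-major array of camera-space coordinates,
    one entry per pixel of the original img_w x img_h image.
    """
    # Step 1: undo any UI scaling. If a 640x480 image is shown as a
    # 320x240 thumbnail, each mouse coordinate is multiplied by 2 here.
    x = int(e_x * img_w / ui_w)
    y = int(e_y * img_h / ui_h)
    # Step 2: row-major lookup, i.e. array[x + y * imageWidth].
    return mapped[x + y * img_w]

# Toy 6x4 "image" whose mapped array just records (X, Y, depth) per pixel,
# so the lookup result is easy to check by eye.
img_w, img_h = 6, 4
mapped = [(x, y, 100 + x + y) for y in range(img_h) for x in range(img_w)]

# Mouse at (2, 1) on a half-size 3x2 thumbnail -> image pixel (4, 2).
print(ui_to_camera(2, 1, ui_w=3, ui_h=2, img_w=6, img_h=4, mapped=mapped))
```

The key detail is that the mapped array is indexed row-major, so the flat index is `x + y * imageWidth`, not `y + x * imageHeight`.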