
How to calculate pointcloud from raw depth frame data

JShan16
Beginner

I'm trying to stream the camera output over a network to render it on a different machine. I want to render the pointcloud, but the result of calculating the pointcloud is much larger than just the depth frame data, so I want to stream the depth frame first and calculate the pointcloud on the other machine. I can stream the raw data no problem, but I don't know how to get it back into a format that the calculate() function will work with on the other end. Since frame.get_data() returns a const pointer, I can't just make a new frame and stick the data into it. I assume that since the frame instances maintain their own memory for the data, I can't just try to serialize them.

 

I'm really stuck on this. Trying to stream the pointcloud even at 1 fps would need almost a gigabit per second!

 

Thanks for your help.

3 Replies
MartyG
Honored Contributor III

In January 2018, Intel did a demo at the Sundance Film Festival where they captured point cloud data on a PC attached to the camera and sent the data to a separate PC for post-processing. They likely did this over a network, like you are trying to do. Some of the technical details are in the link below.

https://realsense.intel.com/intel-realsense-volumetric-capture/

I also recommend watching Intel's recent RealSense presentation on 'Deep learning for VR / AR', which features use of 'tiny networking'.

 

https://realsense.intel.com/webinars/

jb455
Valued Contributor II

If you send the intrinsics over too, you can use rs2_deproject_pixel_to_point (https://github.com/IntelRealSense/librealsense/blob/master/include/librealsense2/rsutil.h#L49) on each pixel in the depth image to generate the point cloud. The intrinsics won't change once the stream has started so you only need to send them once.
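To illustrate, here is a rough sketch of the receiving side. Rather than calling librealsense directly, it reimplements the pinhole back-projection that rs2_deproject_pixel_to_point performs in the distortion-free (RS2_DISTORTION_NONE) case, so it compiles with no SDK installed. The Intrinsics struct just mirrors the relevant rs2_intrinsics fields; the field values, the depth_scale (e.g. 0.001 metres per unit on D400-series cameras), and the function names here are illustrative, not from the thread.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Minimal stand-in for the rs2_intrinsics fields used in the
// distortion-free case. Send these once alongside the depth stream.
struct Intrinsics {
    float fx, fy;   // focal lengths, in pixels
    float ppx, ppy; // principal point, in pixels
};

// Pinhole back-projection: the same math rs2_deproject_pixel_to_point
// performs when the distortion model is RS2_DISTORTION_NONE.
void deproject(const Intrinsics& in, float u, float v, float depth_m,
               float point[3]) {
    point[0] = (u - in.ppx) / in.fx * depth_m;
    point[1] = (v - in.ppy) / in.fy * depth_m;
    point[2] = depth_m;
}

// Rebuild a point cloud (x,y,z triples, in metres) from a raw 16-bit
// depth buffer received over the network. depth_scale converts raw
// depth units to metres.
std::vector<float> to_pointcloud(const uint16_t* raw, int w, int h,
                                 float depth_scale, const Intrinsics& in) {
    std::vector<float> cloud;
    cloud.reserve(3u * w * h);
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            float d = raw[v * w + u] * depth_scale;
            if (d <= 0.f) continue; // depth 0 means no data at this pixel
            float p[3];
            deproject(in, static_cast<float>(u), static_cast<float>(v), d, p);
            cloud.insert(cloud.end(), p, p + 3);
        }
    }
    return cloud;
}
```

In a real pipeline you would use rs2_deproject_pixel_to_point from rsutil.h instead of the hand-rolled deproject() so that the camera's actual distortion model is honoured, and you would query the depth scale once on the sending side (it is a property of the depth sensor) and transmit it along with the intrinsics.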

JShan16
Beginner

Thanks, I think this is exactly what I needed!
