I'm trying to stream the camera output over a network to render it on a different machine. I want to render the pointcloud, but the result of calculating the pointcloud is much larger than just the depth frame data, so I want to stream the depth frame first and calculate the pointcloud on the other machine. I can stream the raw data no problem, but I don't know how to get it back into a format that the calculate() function will work with on the other end. Since frame.get_data() returns a const pointer, I can't just make a new frame and stick the data into it. I assume that since the frame instances maintain their own memory for the data, I can't just try to serialize them.
I'm really stuck on this. Trying to stream the pointcloud even at 1 fps would need almost a gigabit per second!
Thanks for your help.
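Edit: to clarify what I mean by streaming the raw data — per frame, the only thing that has to cross the network is the raw Z16 buffer from `get_data()`, plus a small header (width, height, depth units), with the intrinsics sent once at startup. A rough sketch of the framing I'm using (the struct and function names here are my own, not librealsense API):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Minimal wire format: fixed header followed by the raw Z16 pixels.
// A real sender would fill this from rs2::depth_frame (get_width(),
// get_height(), get_units(), get_data()) and push the bytes over a socket.
struct DepthHeader {
    uint32_t width;
    uint32_t height;
    float    depth_units;  // metres per depth unit, e.g. 0.001f
};

// Pack header + pixel buffer into one contiguous byte vector.
std::vector<uint8_t> serialize_depth(const DepthHeader& h, const uint16_t* pixels) {
    const size_t n = size_t(h.width) * h.height;
    std::vector<uint8_t> buf(sizeof(DepthHeader) + n * sizeof(uint16_t));
    std::memcpy(buf.data(), &h, sizeof h);
    std::memcpy(buf.data() + sizeof h, pixels, n * sizeof(uint16_t));
    return buf;
}

// Recover header + pixels on the receiving side; returns false on a
// malformed buffer.
bool deserialize_depth(const std::vector<uint8_t>& buf,
                       DepthHeader& h, std::vector<uint16_t>& pixels) {
    if (buf.size() < sizeof(DepthHeader)) return false;
    std::memcpy(&h, buf.data(), sizeof h);
    const size_t n = size_t(h.width) * h.height;
    if (buf.size() != sizeof h + n * sizeof(uint16_t)) return false;
    pixels.resize(n);
    std::memcpy(pixels.data(), buf.data() + sizeof h, n * sizeof(uint16_t));
    return true;
}
```

This part works fine; the open question is what to do with the pixel buffer once it arrives.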
If you send the intrinsics over too, you can use rs2_deproject_pixel_to_point (https://github.com/IntelRealSense/librealsense/blob/master/include/librealsense2/rsutil.h#L49) on each pixel in the depth image to generate the point cloud. The intrinsics won't change once the stream has started so you only need to send them once.
In January 2018, Intel did a demo at the Sundance Film Festival where they captured point cloud data on a PC with the camera and sent the data to a separate PC for post-processing. They likely did this over a network, like you are trying to do. Some of the technical details are in the link below.
https://realsense.intel.com/intel-realsense-volumetric-capture/
I also recommend watching Intel's recent RealSense presentation on 'Deep learning for VR / AR', which features use of 'tiny networking'.
https://realsense.intel.com/webinars/