I have a Realsense D455.
I am writing a script for depth data collection on a Raspberry Pi.
When I search for how to do this, I often see the method of saving the data to `.bag` files, but on my setup this causes the preview to freeze for about a second every few seconds.
I suspect writing the data is too heavy for the Raspberry Pi.
The `.bag` files are also quite large, so I decided to use OpenCV's `VideoWriter` to save the depth information instead.
The following simple code converts the 16-bit depth data to an RGB image by splitting each value into a high byte and a low byte:

```python
import numpy as np
import pyrealsense2 as rs

# size_h, size_w match the configured depth stream resolution
depth_image = np.zeros((size_h, size_w, 3), dtype=np.uint8)
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
depth_image_u16 = np.asanyarray(depth_frame.get_data())
depth_image[:, :, 0] = depth_image_u16 // 256  # high byte
depth_image[:, :, 1] = depth_image_u16 % 256   # low byte
```
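For reference, reversing this packing back into a `uint16` array is straightforward, assuming the saved image round-trips losslessly (a lossless codec; lossy compression would corrupt the byte-packed values). A minimal sketch:

```python
import numpy as np

def unpack_depth(packed_image: np.ndarray) -> np.ndarray:
    """Recombine the high/low bytes stored in channels 0 and 1
    back into a uint16 depth array (inverse of the packing above)."""
    high = packed_image[:, :, 0].astype(np.uint16)
    low = packed_image[:, :, 1].astype(np.uint16)
    return high * 256 + low

# round-trip check with synthetic depth values
depth = np.array([[0, 1000], [65535, 4096]], dtype=np.uint16)
packed = np.zeros((2, 2, 3), dtype=np.uint8)
packed[:, :, 0] = depth // 256
packed[:, :, 1] = depth % 256
assert np.array_equal(unpack_depth(packed), depth)
```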
I want to know whether I can restore a `depth_frame` object from an OpenCV image (NumPy array) saved as above.
If I can restore `depth_frame` from an OpenCV image, I want to use the `get_distance()` method to get the distance at any position.
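Even without a `depth_frame` object, the distance can be computed directly from the unpacked array: `get_distance()` is essentially the raw depth value multiplied by the sensor's depth scale. A sketch, assuming a depth scale of 0.001 m per unit (typical for D400-series cameras, but the real value should be read from the live sensor via `first_depth_sensor().get_depth_scale()` and recorded alongside the video):

```python
import numpy as np

# Assumed scale: 1 raw unit = 1 mm. Read the real value from the sensor.
DEPTH_SCALE = 0.001

def distance_at(depth_u16: np.ndarray, x: int, y: int) -> float:
    """Distance in meters at pixel (x, y), mirroring get_distance()."""
    return float(depth_u16[y, x]) * DEPTH_SCALE

depth = np.full((480, 640), 1500, dtype=np.uint16)  # 1500 raw units
print(distance_at(depth, 320, 240))  # 1.5 m with the assumed scale
```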
And I also want to convert the `depth_frame` to a point cloud with `pyrealsense2.pointcloud().calculate(depth_frame)`.
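A point cloud can likewise be built from the unpacked array by deprojecting each pixel with the camera intrinsics, using the same pinhole model as `rs2_deproject_pixel_to_point` (lens distortion ignored here for brevity). The intrinsics below are placeholders; the real ones would come from `depth_frame.profile.as_video_stream_profile().get_intrinsics()`:

```python
import numpy as np

def depth_to_pointcloud(depth_u16, fx, fy, ppx, ppy, depth_scale=0.001):
    """Deproject a depth array into an (H*W, 3) point cloud in meters,
    using a pinhole model (distortion ignored)."""
    h, w = depth_u16.shape
    z = depth_u16.astype(np.float32) * depth_scale
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - ppx) * z / fx
    y = (v - ppy) * z / fy
    return np.dstack((x, y, z)).reshape(-1, 3)

# placeholder intrinsics; read the real values from the stream profile
depth = np.full((480, 640), 2000, dtype=np.uint16)
pts = depth_to_pointcloud(depth, fx=380.0, fy=380.0, ppx=320.0, ppy=240.0)
print(pts.shape)  # (307200, 3)
```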
So, I would like to know whether it is possible to reconstruct a `depth_frame` object from depth information saved as an OpenCV image, or whether there is something I am missing.
Thanks for reaching out to us. This forum primarily supports queries related to the Intel Distribution for Python. Please submit RealSense queries here: https://support.intelrealsense.com/hc/en-us/requests/new. We will not be monitoring this thread further.