Hello,
I'm running a RealSense D435 camera from Python code, currently using the callback mechanism through sensor.start.
My code captures depth frames, displays most of them, and saves some of them, currently as independent 16-bit PNG files.
I would like to be able to open the frames after the fact and convert them to point clouds. The simplest way seems to be rs.rs2_deproject_pixel_to_point, since pointcloud requires a frame as input, and at this processing stage I'm not overly concerned with speed.
For this, however, I need to save and load the stream intrinsics, and I was wondering if there is a direct way to do that? pickle doesn't seem to work on the intrinsics object, and the offline part of the module seems to have been removed. Saving a bag file also doesn't seem like an option, as it does not allow inspection the way PNGs do, and it does not seem to let me choose which frames to save after the fact.
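For context, the capture side looks roughly like this (a stripped-down sketch, not my exact code; the profile selection and the cv2 PNG writing are simplified illustrations):
import numpy as np
import cv2
import pyrealsense2 as rs
saved_frames = []
def on_frame(frame):
    # Callback invoked by sensor.start; copy the data out, since the
    # SDK recycles the underlying frame buffer
    depth = np.asanyarray(frame.as_depth_frame().get_data())
    saved_frames.append(depth.copy())
sensor = rs.context().query_devices()[0].first_depth_sensor()
# Pick a Z16 depth profile (selection simplified here)
profile = next(p for p in sensor.get_stream_profiles()
               if p.stream_type() == rs.stream.depth and p.format() == rs.format.z16)
sensor.open(profile)
sensor.start(on_frame)
# ... later, after an external trigger decides which frames to keep:
cv2.imwrite("frame_0000.png", saved_frames[0])  # 16-bit PNG (uint16 data)
sensor.stop()
sensor.close()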
Thanks
I hope the link below will be of use to you.
https://github.com/IntelRealSense/librealsense/pull/1118 (Python API additions by zivsha, Pull Request #1118)
In regard to getting frames from a bag, Dorodnic, the RealSense SDK Manager, has previously said: "Our built-in recorder will collect everything you need, and it is fairly well optimized, at least when running in Release. You can then extract individual frames / models as a post-processing step".
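In Python, that recorder is enabled through the stream configuration. A minimal sketch (the file name and frame count are just examples):
import pyrealsense2 as rs
# Record everything the pipeline streams into a bag file
config = rs.config()
config.enable_record_to_file("capture.bag")
pipeline = rs.pipeline()
pipeline.start(config)
for _ in range(300):
    pipeline.wait_for_frames()
pipeline.stop()
# Later, play the bag back as if it were a live camera
config = rs.config()
config.enable_device_from_file("capture.bag")
pipeline = rs.pipeline()
pipeline.start(config)
frames = pipeline.wait_for_frames()  # extract individual frames here
pipeline.stop()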
An introductory tutorial to using post-processing filters in Python with a bag file can be found here:
https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_filters.ipynb (depth_filters.ipynb, jupyter branch)
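As a flavour of what that notebook covers, applying a post-processing filter to frames read back from a bag looks roughly like this (a sketch; the decimation filter and the bag name are just examples):
import pyrealsense2 as rs
config = rs.config()
config.enable_device_from_file("capture.bag")
pipeline = rs.pipeline()
pipeline.start(config)
decimation = rs.decimation_filter()   # one of the filters the notebook covers
frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()
filtered = decimation.process(depth)  # returns a new, filtered depth frame
pipeline.stop()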
Unfortunately, the answer has nothing to do with the question:
1. It requires the intrinsics to be available, which means that I need to store them somehow, either in a bag or in a form I can reload later.
2. A bag is not relevant, since I keep only a small subset of frames in memory in response to external events (hardware triggers) and save them after the fact, while the file recorder stores all of them. It also does not let me view the frames as images without storing a duplicate. I want to store just the depth data and the intrinsics and process offline.
The only solution I have found so far is to manually copy the intrinsics one by one into a dictionary, save this dictionary, and then manually reconstruct the intrinsics object, since the intrinsics object cannot be pickled (it has no __dict__, presumably because it is a native extension type).
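Concretely, the round trip looks something like this (a sketch; json is my own choice of storage, and going through int() for the distortion enum is my workaround for it not being JSON-serializable):
import json
import pyrealsense2 as rs
def intrinsics_to_dict(intr):
    # Copy the fields one by one; the object itself cannot be pickled
    return {"width": intr.width, "height": intr.height,
            "ppx": intr.ppx, "ppy": intr.ppy,
            "fx": intr.fx, "fy": intr.fy,
            "model": int(intr.model),   # enum -> plain int for json
            "coeffs": list(intr.coeffs)}
def intrinsics_from_dict(d):
    intr = rs.intrinsics()
    intr.width, intr.height = d["width"], d["height"]
    intr.ppx, intr.ppy = d["ppx"], d["ppy"]
    intr.fx, intr.fy = d["fx"], d["fy"]
    intr.model = rs.distortion(d["model"])   # rebuild the enum from its value
    intr.coeffs = d["coeffs"]
    return intr
# json.dump / json.load these dicts alongside the saved PNGs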
My apologies. You said that "Saving a bag file also doesn't seem like an option", suggesting that it was something you would like to do if it were possible.
The RealSense GitHub site, where the RealSense developers and engineers are located, will be your best option for guidance on this highly technical programming question. You can post a question there by visiting the link below and clicking the 'New Issue' button.
https://github.com/IntelRealSense/librealsense/issues
"The only solution I found so far is to manually copy the intrinsics one by one into a dictionary, save this dictionary, and them manually reconstruct the intrinsics object, as the intrinsics object does not allow being pickled (it has no dictionary, presumably because it's a Cython construct)"
For anyone who is interested:
First, print out all the values:
# aligned_depth_frame comes from your normal capture/align loop
depth_intrinsic = aligned_depth_frame.profile.as_video_stream_profile().intrinsics
print(depth_intrinsic.width)
print(depth_intrinsic.height)
print(depth_intrinsic.ppx)
print(depth_intrinsic.ppy)
print(depth_intrinsic.fx)
print(depth_intrinsic.fy)
print(depth_intrinsic.model)
print(depth_intrinsic.coeffs)
Then use those values to construct the intrinsics object as follows:
import pyrealsense2 as rs
# Example values from my D435 camera (use your own parameter values)
depth_intrinsic = rs.intrinsics()
depth_intrinsic.width = 424
depth_intrinsic.height = 240
depth_intrinsic.ppx = 213.47621154785156
depth_intrinsic.ppy = 121.29695892333984
depth_intrinsic.fx = 306.0126953125
depth_intrinsic.fy = 306.1602783203125
depth_intrinsic.model = rs.distortion.inverse_brown_conrady
depth_intrinsic.coeffs = [0.0, 0.0, 0.0, 0.0, 0.0]
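With the reconstructed intrinsics you can then deproject pixels from a saved 16-bit PNG. A sketch (the file name and pixel are examples, and the 0.001 depth scale is the usual D4xx default; query depth_sensor.get_depth_scale() on your device for the real value):
import numpy as np
import cv2
depth_image = cv2.imread("frame_0000.png", cv2.IMREAD_UNCHANGED)  # uint16 depth
x, y = 212, 120                        # example pixel
depth_m = depth_image[y, x] * 0.001    # raw depth units -> meters (assumed scale)
point = rs.rs2_deproject_pixel_to_point(depth_intrinsic, [x, y], depth_m)
print(point)                           # [X, Y, Z] in meters, camera coordinates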
Thanks a lot, this helped me greatly!
