After aligning and preprocessing my framesets (captured from D415 or D435 cameras), I'm creating point clouds with rs2::pointcloud: map_to() on the color frame and calculate() on the depth frame.
This works fine unless I use a decimation filter. With decimation enabled, the color image appears not to have been decimated, and all the points in my point cloud get the wrong color: it looks as if only the pixels from the top-left of the video frame have been used to color the points.
The point-cloud examples all seem to use synthetic color generated from the depth data after all the preprocessing, so they don't help. realsense-viewer does seem to handle this correctly, but I'm getting completely lost trying to understand how it does so.
Can anyone tell me what I'm doing wrong?
If you are applying the decimation after alignment, that is likely the problem: Intel recommends applying post-processing filters before aligning depth to color. This, they say, is how the RealSense Viewer handles point clouds.
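The recommended ordering can be sketched roughly as below. This is a minimal sketch, not a tested implementation: it assumes librealsense2 and a connected camera, and the exact chaining calls (e.g. `rs2::frameset::apply_filter`) may vary with SDK version. The key point is that the decimation filter runs on the raw depth frame first, which also updates the depth stream's intrinsics, so the subsequent alignment to color produces matching depth and color pixels for texturing.

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    rs2::decimation_filter decimate;              // post-processing: reduce depth resolution
    rs2::align align_to_color(RS2_STREAM_COLOR);  // align depth into the color frame
    rs2::pointcloud pc;

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();

        // 1. Decimate the depth frame FIRST (updates depth intrinsics),
        // 2. then align the decimated depth to the color stream.
        frames = frames.apply_filter(decimate)
                       .apply_filter(align_to_color);

        rs2::depth_frame depth = frames.get_depth_frame();
        rs2::video_frame color = frames.get_color_frame();

        // Texture the point cloud with the color frame, then compute points
        // from the already-decimated, already-aligned depth.
        pc.map_to(color);
        rs2::points points = pc.calculate(depth);

        // ... use 'points' (e.g. export or render) ...
    }
    return 0;
}
```

Filtering the other way around (align first, decimate second) leaves the texture coordinates computed against a resolution that no longer matches the depth frame, which would explain colors being sampled from the top-left corner of the image.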
