
Combining multiple depth streams into one

MMend10
Beginner
5,852 views

Hello,

I know it is possible to view depth streams from multiple cameras. However, these streams are separate.

Right now I need to combine the streams from multiple cameras (more specifically, two D415s) to obtain a combined point cloud. The problem is that these streams obviously don't match, due to differences in position and orientation between the cameras.

I was wondering whether there is a native, ready-made solution for combining these streams into a single matched one, or whether an additional custom programmatic solution is required.

Thank you in advance for whatever answers/attention.

0 points
6 Replies
MartyG
Honored Contributor III

The easiest approach may be to modify the 'Align' sample of the RealSense SDK 2.0, which aligns streams. The sample's code automatically looks for streams from all attached cameras.

https://github.com/IntelRealSense/librealsense/tree/master/examples/align

MMend10
Beginner

This may be the best option I've found so far, thank you.

I'll look into this solution as quickly as possible and report back soon™.

Since this wasn't included in the examples provided with the SDK 2.0 VS project, I never found it, let alone considered it.

MartyG
Honored Contributor III

You are very welcome. You can find the current full list of SDK 2.0 samples at the link below. Further samples are sometimes added when new builds of the SDK 2.0 are released. Good luck!

https://github.com/IntelRealSense/librealsense/tree/master/examples

EMuel
Newcomer

Hello,

I'm working on the same issue. Did you find a way to combine/align the point clouds from two D415s?

I spent weeks trying to (somehow) understand the SDK examples, and had to start learning C++ along the way. Now I can project the point clouds from two D415s in the same OpenGL window, one on top of the other.

So far so good. But how do I go further and merge the point clouds?

The rs-align example doesn't really help.

Cheers, Edgar

Anders_G_Intel
Employee

The first and easiest way to do this is to create individual point clouds and then translate and rotate them using affine transforms.

https://en.wikipedia.org/wiki/Affine_transformation

Basically, it involves transforming your (X, Y, Z) points into a new rotated set. Many programming languages have this as a predefined function.

Then just rotate/translate the one camera's viewpoint until it looks aligned. Append the two arrays of points, and you have your final point cloud.

This should work for the most part.
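The transform-and-append approach can be sketched in plain C++. This is a minimal sketch with hand-picked extrinsics: the yaw angle and translation are illustrative values you would tune or calibrate, and in a real pipeline the vertices would come from the SDK's point-cloud output rather than a plain struct.

```cpp
#include <cmath>
#include <vector>

// One vertex of a depth camera's point cloud.
struct Point3 { double x, y, z; };

// Rigid transform: rotate by `yaw` radians about the Y axis,
// then translate by (tx, ty, tz).
Point3 transformPoint(const Point3& p, double yaw,
                      double tx, double ty, double tz) {
    const double c = std::cos(yaw), s = std::sin(yaw);
    return { c * p.x + s * p.z + tx,
             p.y + ty,
             -s * p.x + c * p.z + tz };
}

// Bring cloud `b` into cloud `a`'s coordinate frame and append it.
std::vector<Point3> mergeClouds(const std::vector<Point3>& a,
                                const std::vector<Point3>& b,
                                double yaw, double tx, double ty, double tz) {
    std::vector<Point3> merged = a;
    merged.reserve(a.size() + b.size());
    for (const auto& p : b)
        merged.push_back(transformPoint(p, yaw, tx, ty, tz));
    return merged;
}
```

With each camera's vertices copied into such structs, the merged vector is the combined cloud; the (yaw, tx, ty, tz) values encode the second camera's pose relative to the first.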

If you want even better results, you should look into packages or techniques that recalibrate the two cameras and bundle-adjust, giving you the best alignment.

Maybe someone else can guide you on this if you need this level of alignment.

EMuel
Newcomer

Thank you for the advice.

I need the point clouds to match as closely as possible.

I got the transformation working by manipulating the translation and rotation of one cloud. However, this is manual alignment and not precise.

RecFusion provides what I need. It also saves the transformation values to a file.

Using this information might be a quick and dirty solution for now. But since I want to record the streams and don't just want a static 3D object, RecFusion is not my choice.

I also prefer to program it myself.

So recalibration of the cameras is the next step.

RecFusion needs a calibration chart.

I will try to:

- find the edges of the chart's rectangle in each device's frame

- compare the distances given by the depth values of those points

- I guess I will run into a lot of math

Does anyone know a library that does the "bundle calibration" of given devices?
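The chart-based steps above amount to estimating a rigid transform from matched 3D points. The rotation part needs an SVD-based solver (the Kabsch/Umeyama algorithm; Eigen ships one as `Eigen::umeyama`, and OpenCV offers `cv::estimateAffine3D`), but the translation between the cameras can already be approximated from the centroids of the matched chart points. A minimal sketch of that first step, with illustrative names not taken from any library:

```cpp
#include <vector>

struct Point3 { double x, y, z; };

// Mean of a set of 3D points.
Point3 centroid(const std::vector<Point3>& pts) {
    Point3 c{0, 0, 0};
    for (const auto& p : pts) { c.x += p.x; c.y += p.y; c.z += p.z; }
    const double n = static_cast<double>(pts.size());
    return { c.x / n, c.y / n, c.z / n };
}

// Rough translation mapping the chart points seen by camera 1 (`src`)
// onto the same points seen by camera 2 (`dst`), valid when the rotation
// between the cameras is small. For the full rigid transform, subtract
// these centroids from both sets, solve for the rotation via SVD
// (Kabsch), then recompute the translation.
Point3 estimateTranslation(const std::vector<Point3>& src,
                           const std::vector<Point3>& dst) {
    const Point3 cs = centroid(src), cd = centroid(dst);
    return { cd.x - cs.x, cd.y - cs.y, cd.z - cs.z };
}
```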
