
Point Cloud Accuracy

vhanded
Novice

I purchased 4 x D410 modules with HW sync for the purpose of volumetric capture. Over months of experimenting, I have tried several ways to align the point clouds together:

 

  1. Vicalib
  2. Point cloud registration using a box/object to extract each camera's pose, then applying that pose to its point cloud (a rough sketch of this approach is just below).
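
To be concrete about method 2, this is roughly what I tried. It is only a rough sketch using Open3D; the file names, voxel size and ICP distance are placeholder values, and the real clouds come from the four D410s:

import open3d as o3d

# Clouds of the calibration box captured by two cameras (placeholder file names)
reference = o3d.io.read_point_cloud("cam0_box.ply")   # reference camera
source = o3d.io.read_point_cloud("cam1_box.ply")      # camera to be aligned

# Downsample and estimate normals so point-to-plane ICP has stable geometry
ref_down = reference.voxel_down_sample(voxel_size=0.005)
src_down = source.voxel_down_sample(voxel_size=0.005)
ref_down.estimate_normals()
src_down.estimate_normals()

# Refine from an identity initial guess (assumes the clouds are roughly pre-aligned)
result = o3d.pipelines.registration.registration_icp(
    src_down, ref_down, max_correspondence_distance=0.02,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

# result.transformation is the recovered pose of this camera relative to the
# reference; apply it to the full-resolution cloud before merging
source.transform(result.transformation)
merged = reference + source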

 

With both methods it is almost impossible to align the clouds properly. This makes me wonder whether the error comes from the camera pose or from the point cloud accuracy itself. Would a different RealSense camera be able to produce a more accurate result?

 

Question 2:

The D415 has an RGB camera. Does this extra feature make the point cloud shape more accurate?

 

Thanks.

MartyG
Honored Contributor III

1. Intel stated in a webinar session about multiple cameras that although Vicalib can be used to align point clouds, "there is a simpler approach, which will work in 90% of cases. This is to take the point cloud from every one of the cameras and then do an Affine Transform. Basically, just rotate and move the point clouds in 3D space, and then once you've done that, you append the point clouds together and just have one large point cloud".
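
To illustrate what that quote describes, here is a minimal sketch (not an official Intel example): the per-camera vertex arrays and the 4x4 poses are placeholders that you would obtain from your own capture and calibration.

import numpy as np

def transform_points(points, pose):
    # Apply a 4x4 homogeneous transform (rotation + translation) to an Nx3 array
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (pose @ homogeneous.T).T[:, :3]

# Placeholder inputs: in practice each array would come from one camera's depth
# stream (e.g. rs.pointcloud().calculate(depth_frame)) and each pose from your
# calibration of that camera into a common world frame
per_camera_points = [np.random.rand(1000, 3) for _ in range(4)]
poses = [np.eye(4) for _ in range(4)]

# Rotate/move each cloud into the shared frame, then append into one large cloud
combined = np.vstack([transform_points(pts, pose)
                      for pts, pose in zip(per_camera_points, poses)])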

 

Another approach to point cloud stitching with multiple cameras was used by the CONIX Research Center at Carnegie Mellon, which used an Ethernet-based system to stitch point clouds.

 

https://forums.intel.com/s/question/0D50P00004Fjfd9SAB/a-conix-research-center-program-for-combining-point-clouds-from-multiple-400-series-cameras?language=en_US

 

2. When creating a point cloud, the RGB stream can optionally be used to add color texture and produce a colored point cloud, but it is not a requirement.
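
For example, with pyrealsense2 the color texture is just one optional call on the pointcloud object, roughly like this (a minimal sketch with default stream settings):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
pipeline.start(config)

pc = rs.pointcloud()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()

    pc.map_to(color)              # optional: attach the RGB image as texture
    points = pc.calculate(depth)

    vertices = points.get_vertices()               # XYZ positions of the cloud
    tex_coords = points.get_texture_coordinates()  # UV lookups into the color image
finally:
    pipeline.stop()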

vhanded
Novice

@MartyG

 

Thanks for your reply. I have read that part of the webinar a few times. Basically, doing an affine transform for each of the 4 cameras 'carefully' is too time consuming for us, so we are looking for a reliable and robust calibration method. Vicalib is supposed to fit the job, but the calibration result is not up to standard, with quite an unacceptable error.

 

The CONIX research uses a theodolite for calibration, which is not feasible for us either.

 

It is surprisingly difficult to find a reliable solution to this problem.

MartyG
Honored Contributor III

As a next step, you might like to investigate the link below, which gives guidance on using ROS to create a unified point cloud with two or three cameras. It also gives advice on fine-tuning camera calibration.

 

https://github.com/IntelRealSense/librealsense/issues/2531

vhanded
Novice

While struggling with Vicalib, I found one very interesting point. Does the rolling shutter on the D410 affect the accuracy of the output? It might distort the calibration grid board while I move it around. I tried much slower movement, and the result improved slightly, but it is still not good enough.

 

Which camera does Intel use for volumetric capture? The D415 with rolling shutter, or the D435 with global shutter?

MartyG
Honored Contributor III

The D415 has higher accuracy than the D435. The D410 uses D415-type technology (models 400, 410 and 415 are rolling shutter, while models 420 and 430 are D435-like global shutter). The global-shutter models are best suited to tracking fast motion though, as the slower rolling shutter can cause artifacts such as smearing on the image when observing an object that is moving quickly.

 

Typically, Intel uses the D435 when demonstrating RealSense applications where there is going to be movement, such as volumetric capture or mounting on a mobile robot.

 

Intel did a volumetric capture demo in January 2018 at the Sundance Festival. They had five PCs: one for each of four D435 cameras, plus a fifth PC that all the point cloud data was sent to for combining and final processing.

 

https://www.intelrealsense.com/intel-realsense-volumetric-capture/

 

Also, here is a far more advanced example of volumetric capture at Intel Studios near LAX airport, where they captured a large group of dancers performing a song from Grease:

 

https://www.youtube.com/watch?v=G0XUgnPl9KQ
