Beginner

3D Measurements


Hi, I am not sure if this is the right place; apologies if it is not.

We are looking to create a system that would measure objects of different shapes and produce a detailed 3D map of each one. Items such as small jewellery, wine bottles, tennis rackets, and toys are among the most common types to be measured.

We have a concept for a frame with an internal space of approximately 40 cm (width) x 40 cm (height) x 80 cm (length) into which the items will be placed and measured. We believe that placing ToF cameras in the four top corners (and perhaps one camera in the top centre) will capture the required data from most of the angles needed and minimize the size of the overall structure.

Ideally, we are interested in a ToF camera with a 90-degree field of view, which we believe is best suited to this concept. Our interest is in capturing the volume, length, width, and height of the object placed in the measurement space.

Just curious whether RealSense technology might be used here, and what its limitations are.

Any suggestions and recommendations will be highly appreciated. 

Thank you.


Accepted Solutions

Valued Contributor II
First of all, RealSense does not use ToF technology; that is what the Kinect 2 or lidars use, for example.

RealSense uses structured light. This means that it projects a pattern, which may be static or dynamic, and then, from an image, generates a depth map based on where the projected pattern falls in the scene.

Now, in the scenario you are interested in, regardless of the technology you use (ToF, structured light, etc.), as long as it is active you will get some degree of noise from the other cameras, since their fields of view overlap. You can attenuate this by multiplexing the signals over time, or by carefully positioning the cameras around the space to reduce or remove the overlap.

Since you want to stitch together the information from the cameras, some overlap actually makes stitching easier, so multiplexing over time seems to be the best solution.
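The time-multiplexing idea can be sketched as a simple round-robin loop. The `Camera` class below is a hypothetical stand-in for a depth camera whose IR projector can be toggled; the real SDK call names will differ:

```python
import time

class Camera:
    """Hypothetical stand-in for a depth camera with a switchable IR projector."""
    def __init__(self, name):
        self.name = name
        self.emitter_on = False

    def set_emitter(self, on):
        self.emitter_on = on

    def capture(self):
        # In a real system this would grab a depth frame from the SDK.
        return {"camera": self.name, "emitter": self.emitter_on}

def round_robin_capture(cameras, exposure_s=0.0):
    """Fire one projector at a time so the patterns never interfere."""
    frames = []
    for cam in cameras:
        for other in cameras:
            other.set_emitter(other is cam)  # only the active camera projects
        if exposure_s:
            time.sleep(exposure_s)           # let the projector/exposure settle
        frames.append(cam.capture())
    for cam in cameras:
        cam.set_emitter(False)
    return frames

cams = [Camera(f"cam{i}") for i in range(5)]  # four corners plus top centre
frames = round_robin_capture(cams)
```

The trade-off is frame rate: with five cameras multiplexed over time, each one captures at a fifth of its native rate, which is fine for static objects in a measurement frame.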

Now, assuming you have everything set up correctly, the real fun begins. You need to calibrate the cameras and stitch their information together. At that stage, all you have is a set of calibrated 3D points; you still need to turn that into a 3D model.
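The stitching step amounts to applying each camera's extrinsics (a rotation and translation obtained from calibration) to its point cloud and concatenating the results. The matrices below are illustrative values, not real calibration data:

```python
import numpy as np

def to_world(points, R, t):
    """Transform an (N, 3) point cloud from camera to world coordinates."""
    return points @ R.T + t

# Two illustrative cameras: one at the world origin, one rotated 90 degrees
# about the z-axis and shifted 40 cm along x (values stand in for calibration).
R0, t0 = np.eye(3), np.zeros(3)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t1 = np.array([0.4, 0.0, 0.0])

cloud0 = np.array([[0.1, 0.0, 0.5]])  # one point seen by camera 0
cloud1 = np.array([[0.0, 0.1, 0.5]])  # one point seen by camera 1

merged = np.vstack([to_world(cloud0, R0, t0), to_world(cloud1, Rz, t1)])
```

Getting those extrinsics in the first place usually means imaging a shared calibration target (a checkerboard or sphere) visible to multiple cameras at once.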

This sounds way easier than it actually is. The brain is great at interpreting 3D shapes from a few points floating in the air, but computers don't have that ability; it's hard for them. Also, 3D data is noisy, especially at the edges, some materials do not reflect IR well, and so on. This means your stitched 3D data will have a lot of holes. You need to process that data and create a watertight mesh version of it, which involves many steps, such as connecting the 3D points coming from the cameras into edges and building a wireframe, with its associated vertices, edges, faces, and textures from the colour camera.
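One of those steps, connecting depth points into a wireframe, is at least straightforward when the points arrive organized in the camera's pixel grid: each 2x2 neighbourhood of pixels yields two triangles. A minimal sketch, ignoring the invalid-depth and long-edge filtering a real pipeline would need:

```python
def grid_triangles(h, w):
    """Triangulate an organized (h x w) depth image into index triples.

    Each 2x2 cell of adjacent pixels produces two triangles. A real
    pipeline would skip cells containing invalid depth or overly long
    edges (depth discontinuities at object boundaries).
    """
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c                       # flat index of the cell's top-left pixel
            tris.append((i, i + 1, i + w))      # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return tris

tris = grid_triangles(3, 3)  # a 3x3 image has 2x2 cells -> 8 triangles
```

Merging the per-camera meshes into one watertight model, filling holes, and baking colour textures is where the genuinely hard work lives; that is typically done with surface-reconstruction algorithms such as Poisson reconstruction rather than by hand.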

If it sounds complicated, that's because it is. Companies are being built these days that focus on nothing else. Automating this process is really hard for many different reasons. You can definitely build something, but a genuinely useful, easy-to-use 3D scanner, as commonplace as 2D scanners are, does not exist yet. It would actually be very useful to have one.


3 Replies
Valued Contributor II

You can create a 3D model using one RealSense camera by either moving the camera around the object or keeping the camera fixed and rotating the object. In my experience, however, the resolution and colour reproduction aren't great, so your resulting model may have some weird geometry or blotchy textures (though the latter may be solvable with good, consistent lighting). Large, simpler objects like wine bottles should be fine, but I'd imagine it would get confused by the strings of a tennis racquet, and a small item by itself (< 2 cm, say) may blend into the background. It is very easy to set up and use, though: using the sample code as a reference, you could build the scanning part of your app in a matter of hours. The only thing you would have to work out for your application is calculating the dimensions of the object.
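Assuming you end up with a merged point cloud of the object, the length, width, and height fall out of an axis-aligned bounding box, and a crude volume estimate can come from counting occupied voxels. The helper name and voxel size below are illustrative choices, not part of any SDK:

```python
import numpy as np

def object_dimensions(points, voxel=0.005):
    """Axis-aligned dimensions and a rough voxel-occupancy volume estimate
    for an (N, 3) point cloud in metres. Note the voxel count only covers
    surfaces the cameras actually saw, so this underestimates solid volume."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    length, width, height = maxs - mins
    # Snap each point to a voxel index and count distinct occupied voxels.
    idx = np.floor((points - mins) / voxel).astype(int)
    occupied = len({tuple(i) for i in idx})
    volume = occupied * voxel ** 3
    return (length, width, height), volume

# Example: the eight corners of a 10 cm cube
pts = np.array([[x, y, z] for x in (0, 0.1) for y in (0, 0.1) for z in (0, 0.1)])
dims, vol = object_dimensions(pts)
```

For a true solid volume you would need the watertight mesh mentioned in the accepted answer, but a bounding box is often enough for sorting and packaging use cases.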

Beginner

Thank you, guys; I appreciate your responses. I will look into what cameras are available on the market and take it from there.

 

Thanks a lot.
