dwang40
Beginner
140 Views

How to stitch two 3D .obj models into one unified one?

Hi,

   Can anybody help me or offer any suggestions? Thanks in advance.

   I am using the RealSense R200 camera to capture several 3D .obj models of the same large object. Since each 3D model covers only one area of the object, I need to stitch all of those .obj models into one unified .obj. So my first task is to stitch two .obj models into one. My problems are:

1. What is the Intel RealSense .obj file format? When I open a captured .obj file from the R200, I find that it has 6 values on every "v" line, while a typical .obj file has just 3 values per "v" line. So what are those additional 3 values?

For example: 

# fusionsdk
#g Polygonal_Model_1 (null)
# Number of geometric vertices: 10196
v -0.1396005 0.6051342 -0.2114514 0.407 0.451 0.486
v -0.1336424 0.6064607 -0.2148386 0.410 0.452 0.487
v -0.1396076 0.6063191 -0.2072744 0.401 0.444 0.476
v -0.1347799 0.6073068 -0.2103782 0.407 0.450 0.480

2. If I want to stitch two .obj models, how can I do so? Are there any sample codes?

  Currently, I plan to calculate the correspondence between the two .obj models and then locate their connecting curve. Then I will try to reconstruct the new model from the two overlapping .obj models using that connecting curve. But this plan depends on my having a correct understanding and parsing of the .obj file format, as well as a proper algorithm for the stitching.

 This is totally new to me. I would really appreciate any suggestions or sample codes. Thanks.

 

Regards,

flysharkwang.

3 Replies
samontab
Valued Contributor II

First of all, what you are trying to do is not a trivial task. It's not available as a ready-to-use module that works in all scenarios. The more you can constrain your application, the easier it will be to produce a sensible output.

Ah, 3D data formats. They come in different shapes, forms, flavours, and varieties. This is probably the type of data with the most standard and non-standard formats available. Because 3D means different things to different people, 3D data formats vary wildly. Points, vertices, edges, faces, normals, intensity, colour, features, textures, lights, range, focal length, etc.: these are all things that may or may not be present in a file containing 3D information. So be very careful when reading/writing a specific format, don't assume anything, and try to be as consistent as possible.

Stitching together different 3D models sounds easy, yet it can be really hard in real life. It's a large-dimensional search space. At the bare minimum you have 6 dimensions to search over: X, Y, Z, yaw, pitch, roll. That's assuming the data is perfect, which it is not. You also have to consider noise, lack of depth information in certain areas of the model, and what you are going to use for the matching: just position? IR data? Colour? A specific feature? A learned feature? Plus, if you are using cameras, you need to consider and model the lens distortion as well.
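To make that six-dimensional search space concrete, here is a minimal numpy sketch (illustrative only, not part of any RealSense SDK) that applies one candidate pose, i.e. a rotation built from yaw/pitch/roll plus a translation, to a set of points:

```python
import numpy as np

def rigid_transform(points, yaw, pitch, roll, tx, ty, tz):
    """Apply a 6-DOF rigid transform (Z-Y-X Euler angles plus a
    translation) to an (N, 3) array of points."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about Z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about X
    R = Rz @ Ry @ Rx
    return points @ R.T + np.array([tx, ty, tz])
```

Every candidate alignment the search considers is one choice of these six numbers; the hard part is scoring how well the transformed points match the other model.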

Then you need to start deciding which vertices connect to each other. Or maybe you create new artificial ones, or maybe you simplify the model by removing vertices. And textures: which textures should you use? Also, don't forget to blend the textures to get a more realistic result. Maybe you want to smooth the end result to make it look less edgy, but then it may look too soft/rounded. And don't forget to make it watertight!
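As a toy illustration of the "which vertices connect" step: one simple scheme is to weld vertices that lie within a tolerance of each other by snapping them to a grid. This sketch (the function name and tolerance are my own; real pipelines typically use KD-trees) shows the idea:

```python
import numpy as np

def weld_vertices(vertices, tol=1e-3):
    """Merge vertices closer than roughly `tol` by snapping each one
    to a grid of cell size `tol`, then de-duplicating grid cells.
    Returns the merged vertex positions and an index map old -> new,
    which is what you would use to rewrite the face indices."""
    keys = np.round(np.asarray(vertices) / tol).astype(np.int64)
    unique, index_map = np.unique(keys, axis=0, return_inverse=True)
    merged = unique * tol
    return merged, index_map
```

After welding, faces from both models that referenced near-coincident vertices now share the same vertex, which is one crude way to join overlapping regions.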

Also, remember that all of this should run in real time or the investors won't like the prototype.

Finfa811
Novice

Hi David,

 

1. I have an open question about the R200 .obj format (https://software.intel.com/en-us/forums/realsense/topic/624443) but nobody has answered me so far. I realized the format is v x y z r g b, where x y z are the world coordinates in a right-handed system (x+ right, y+ down, z+ into the scene), with the origin (0,0,0) at the first pose of the camera, and I guess the values are in meters (not sure about that). Anyway, you can change it to a left-handed system; there is a flag in the Intel SDK. r g b are normalized texture values in [0, 1], so in your example all the points are gray.
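Assuming that interpretation of the "v" lines (x y z followed by r g b) is correct, a minimal Python parser could look like this; `parse_obj_vertices` is a made-up name for illustration, not part of any Intel SDK:

```python
def parse_obj_vertices(path):
    """Parse an OBJ file whose 'v' lines carry x y z r g b
    (position plus per-vertex colour), as in the R200 sample above.
    Comment lines starting with '#' and other records are skipped.
    Returns two parallel lists: positions and colours."""
    positions, colours = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == 'v':
                values = [float(v) for v in parts[1:]]
                positions.append(values[:3])
                # Fall back to None if a line has only x y z
                colours.append(values[3:6] if len(values) >= 6 else None)
    return positions, colours
```

From here you can write the positions out as a .pcd or feed them directly into whatever registration code you use.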

2. That's not an easy task, but of course it is possible. I recommend using the Point Cloud Library (http://pointclouds.org/). You can create your own .obj/.pcd/.obj converter now that you know the format. You'll also need to keep track of and map the faces between the local models and the global one. Try converting your models to point clouds and implementing a 3D registration pipeline; in PCL you'll find several 3D keypoint detectors and feature descriptors that will be useful, and you'll have to choose one depending on your needs, because every case is a different problem. Anyway, if you work at the point cloud level, every statistical algorithm will be easier to apply, and most functions that you need (RANSAC, ICP...) are already implemented. Pay attention to the scale between local models, but if you acquire the data with the same sensor (R200) the scale should be the same, which will make your task easier.
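PCL's registration classes do the heavy lifting in C++, but the closed-form step inside each ICP iteration, finding the best-fit rotation and translation for already-matched point pairs (the Kabsch algorithm), is compact enough to sketch in Python with numpy. The function name and setup here are illustrative, not PCL's API:

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst for
    two (N, 3) arrays of corresponding points, minimising the sum of
    squared distances. This is the per-iteration solve inside ICP;
    full ICP also re-estimates the correspondences each iteration."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With perfect correspondences this recovers the exact pose; with noisy scans you would iterate it inside an ICP loop, which PCL already provides.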

 

Hope it helps.

dwang40
Beginner

Hi Samontab and Finfa811,

   I appreciate your kind suggestions and advice very much; they really helped me get a clearer understanding of the task. Since I have recently kept failing to log in to the forum, I could not respond to your comments on time. I am now trying the stitching. Thanks.

  Regards,

  David
