I know this question has been asked here before, and there is a reply that discusses creating a secondary mesh, but the answer is too vague (or maybe I just don't understand it), and the way this is handled in the Interpolation tutorial is very obscure to me; I cannot understand how it works.
What I want to do is to work with objects that can be either:
* Triangle meshes (rtcNewTriangleMesh2)
* Quad meshes (rtcNewQuadMesh2)
* Subdiv meshes made of a mixture of triangles and quads (rtcNewSubdivisionMesh2)
* Hair objects (rtcNewBezierHairGeometry2)
For each of those objects, after a ray intersection where I get ray.u and ray.v, I want to interpolate some per-face/vertex UV coordinates. So I was wondering whether to do the interpolation myself or (probably better) to use rtcInterpolate for it.
Unfortunately, rtcInterpolate only works with per-vertex data, not per-face/vertex data. So I was considering making these secondary auxiliary meshes, but I don't really understand how to do it. Also, I'm not happy about having to duplicate (or more than duplicate) the RAM needed to store the Embree objects.
Perhaps for triangle/quad meshes, and for subdiv meshes made of tris/quads, it's not necessary to use rtcInterpolate? Or is it better for performance reasons to use it despite duplicating RAM?
For triangles and quads it is best to implement the interpolation yourself, as this will give higher performance.
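Doing it yourself might look like the following sketch. It assumes a face-varying layout where each primitive has its own UV indices into a texcoord array (the names uvs and uv_indices are my own, not Embree's), and uses Embree's hit parameterization: for a triangle, the hit point is (1-u-v)*p0 + u*p1 + v*p2, and for a quad the (u,v) are bilinear coordinates over the four corners:

```c
typedef struct { float x, y; } Vec2f;

/* Triangle: barycentric interpolation of the three face-varying UVs.
   Pass ray.primID, ray.u, ray.v from the hit. */
static Vec2f interpolate_tri_uv(const Vec2f* uvs, const unsigned* uv_indices,
                                unsigned primID, float u, float v)
{
    const unsigned* idx = &uv_indices[3*primID]; /* 3 UV indices per triangle */
    float w = 1.0f - u - v;
    Vec2f t0 = uvs[idx[0]], t1 = uvs[idx[1]], t2 = uvs[idx[2]];
    Vec2f r = { w*t0.x + u*t1.x + v*t2.x,
                w*t0.y + u*t1.y + v*t2.y };
    return r;
}

/* Quad: bilinear interpolation of the four face-varying UVs,
   with u running along v0->v1 and v along v0->v3. */
static Vec2f interpolate_quad_uv(const Vec2f* uvs, const unsigned* uv_indices,
                                 unsigned primID, float u, float v)
{
    const unsigned* idx = &uv_indices[4*primID]; /* 4 UV indices per quad */
    Vec2f t0 = uvs[idx[0]], t1 = uvs[idx[1]], t2 = uvs[idx[2]], t3 = uvs[idx[3]];
    Vec2f r = { (1-u)*(1-v)*t0.x + u*(1-v)*t1.x + u*v*t2.x + (1-u)*v*t3.x,
                (1-u)*(1-v)*t0.y + u*(1-v)*t1.y + u*v*t2.y + (1-u)*v*t3.y };
    return r;
}
```

After rtcIntersect you would feed ray.geomID/ray.primID and ray.u/ray.v into the matching function; no secondary mesh and no extra Embree-side memory is needed.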
For subdivision surfaces, face-varying interpolation is now supported, so there is no need to create these dummy subdiv meshes anymore. See the Face-Varying-Data section of the Embree documentation: http://embree.github.io/api.html. You essentially have to create a separate index buffer for the face-varying data, and attach that index buffer to the user vertex buffer you want to interpolate.
Thank you for your answer. However, I still find it difficult to understand.
For example, let's imagine I have a Quad object (or a SubDiv object where all faces are quads) that is a Cube.
The cube would have 6 faces and 8 vertices, so I would create it, for example with:
unsigned geomID = rtcNewSubdivisionMesh2(scene, RTC_GEOMETRY_STATIC, 6, 6*4, 8, 0, 0, 0, 1);
So, I create my Faces buffer with 6 entries, each set to 4 (vertices per face), my Index buffer with the 4 vertex indices of each face, and then the Vertex buffer with the 8 vertices. Up to here, everything works correctly.
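For reference, the three buffers for such a cube could be laid out like this (a sketch; the vertex numbering and winding are my own choice, any consistent ordering works):

```c
/* Faces buffer: number of edges/vertices of each of the 6 faces. */
unsigned faces[6] = { 4, 4, 4, 4, 4, 4 };

/* Index buffer: 6 faces x 4 vertex indices = 24 entries, all in [0,8). */
unsigned indices[24] = {
    0, 1, 2, 3,   /* -z face */
    7, 6, 5, 4,   /* +z face */
    0, 4, 5, 1,   /* -y face */
    3, 2, 6, 7,   /* +y face */
    0, 3, 7, 4,   /* -x face */
    1, 5, 6, 2    /* +x face */
};

/* Vertex buffer: the 8 corners of the cube. */
float vertices[8][3] = {
    {-1,-1,-1}, { 1,-1,-1}, { 1, 1,-1}, {-1, 1,-1},
    {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1}
};
```

The entries of the faces buffer sum to 24, which is the numEdges argument (6*4) passed to rtcNewSubdivisionMesh2, and the index buffer only ever references the 8 position vertices.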
However, if I want UV coordinates that are per-face+vertex rather than just per-vertex, I need to create a User Vertex Buffer holding UV data for 6 faces x 4 vertices = 24 entries. The second index buffer (RTC_INDEX_BUFFER0+1) used for this topology would still describe the same 6 faces, but its entries can now reference any of the 24 UV entries, and I get a segfault because the object was created with room for 8 vertices, not 24.
Alternatively, I could create the object with 24 vertices, but how? Can I just leave the 16 "unused" vertices uninitialized? Would that cause problems somewhere?
Thanks and Best regards!
To set up the per-face data to interpolate, you would essentially do the following in addition to setting up the geometry:
rtcSetBuffer(geom, RTC_INDEX_BUFFER0+1, mesh->texcoord_indices, 0, sizeof(unsigned int), mesh->numEdges);
rtcSetBuffer(geom, RTC_USER_VERTEX_BUFFER0, mesh->texcoords, 0, sizeof(Vec2f), mesh->numTexCoords);
rtcSetIndexBuffer(geom, RTC_USER_VERTEX_BUFFER0, RTC_INDEX_BUFFER0+1);
The texcoords can use their own index buffer and their own data buffer. The number of texcoords does NOT have to be the same as the number of positions/vertices. In your example, texcoord_indices could contain the values 0,1,2,3, 4,5,6,7, 8,9,10,11, ... in case each face has its own texcoords and no texcoords are shared between faces; the texcoords buffer would then contain 24 texture coordinates.
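For the cube example, filling those two buffers might look like the sketch below (the function name and the choice of mapping each face to the unit square are my own, purely illustrative):

```c
/* Build face-varying texcoords for a cube of 6 quad faces: each face gets
   its own 4 UVs spanning the unit square, so nothing is shared between
   faces. texcoord_indices feeds RTC_INDEX_BUFFER0+1, texcoords feeds
   RTC_USER_VERTEX_BUFFER0. */
static void build_cube_texcoords(unsigned texcoord_indices[24],
                                 float texcoords[24][2])
{
    for (unsigned f = 0; f < 6; f++) {
        for (unsigned k = 0; k < 4; k++) {
            unsigned i = 4*f + k;
            texcoord_indices[i] = i;  /* 0,1,2,3, 4,5,6,7, ... no sharing */
            /* Corners of the unit square in order (0,0),(1,0),(1,1),(0,1). */
            texcoords[i][0] = (k == 1 || k == 2) ? 1.0f : 0.0f;
            texcoords[i][1] = (k == 2 || k == 3) ? 1.0f : 0.0f;
        }
    }
}
```

Note that the geometry itself still only needs its 8 position vertices; the 24-entry user vertex buffer is sized independently, so there is no need to pad the position buffer with unused vertices.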
Also have a look at the ConvertSubdivMesh function in scene_device.cpp. There is also a debug scene included that shows the feature: ./viewer -c tutorials/models/cylinder.ecs