
Path Tracing a Trillion Triangles

Anton_Sochenov
Employee

This technical deep dive was written by Anton Sochenov, Manu Mathew Thomas, Cristiano Siqueira, Gabor Liktor, and Akshay Jindal as part of their research efforts at Visual Compute and Graphics lab, within Intel Labs.

Highlights:

  • Reducing the cost of path tracing to achieve real-time performance is a significant challenge and an active area of research in both industry and academia. In this series of blog posts, we share our practical findings on real-time path tracing of the animated one-trillion-triangle Jungle Ruins scene, achieving 30 FPS at 1440p on an Intel Arc B580 GPU.
  • This blog series focuses on the practical application of one-sample-per-pixel (1spp) denoising and supersampling, the metric used for visual quality evaluation, handling animations in a high-complexity scene of one trillion instanced triangles, and the tradeoffs between content creation and performance.

Fig1_Jungle_Ruins_Teaser.png
Figure 1. Fully path traced in real-time: the animated trillion-triangle
Jungle Ruins scene.


Modern AAA game titles are pushing the boundaries of visual experiences. The cost of production grows proportionally, as content creation and rendering visual effects require an ever-increasing effort to achieve life-like quality.

Fig2-AC_RED_147_Steam_NaoeRHP.jpg
Figure 2. Assassin's Creed Shadows.

A significant portion of the cost comes from authoring 3D scenes, which in turn influences other aspects of production. Rasterization has been around for decades, and a great deal of effort has gone into rasterizing triangles at maximum performance. Intrinsically, however, rasterization is only aware of a 3D scene's geometric setup and does not account for how light travels and interacts in the real world, resulting in geometrically accurate but not lifelike imagery. To address this, many clever techniques were created to simulate the appearance of lighting effects, each with its own quality/performance tradeoffs, adding overhead to content creation pipelines, development, and testing.

Fig3_AO.png
Figure 3. Screen Space Ambient Occlusion.
Finding next gen: CryEngine 2 by Martin Mittring

Fig4_SSR.png
Figure 4. Screen Space Reflections.
Scalable Real-Time Ray Tracing by Adam Lake

Path tracing is another method of rendering a 3D scene that has been around for a long time. It is designed to accurately simulate how light travels, bouncing between 3D objects to produce natural-looking visuals in a straightforward way. However, such quality comes at a high computational cost, as a naive approach requires many rays to be cast and bounced throughout the scene.
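Formally, the light transport that path tracing simulates is described by the standard rendering equation: the outgoing radiance at a surface point is its emission plus the incoming radiance from all directions, weighted by the material's reflectance. A path tracer estimates the integral by Monte Carlo sampling, which is exactly why more rays (samples) mean less noise:

$$
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
$$

Here $L_o$ is outgoing radiance, $L_e$ emission, $f_r$ the BRDF, $L_i$ incoming radiance, and $\mathbf{n}$ the surface normal; $L_i$ is itself defined recursively, which is what produces light bounces.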

Fig5_PTvsRaster.png
Figure 5. Left: Cycles (Path Tracing),
blender.org. Right: EEVEE Next (Rasterization + RT), Agent 327 Barbershop.

Reducing the cost of path tracing to achieve real-time performance is a significant challenge and an active area of research in both industry and academia. Many effective solutions have been developed, each focusing on different aspects of this challenge. In this series of blog posts, we would like to share our practical findings on real-time path tracing of the animated one-trillion-triangle Jungle Ruins scene, achieving 30 FPS at 1440p on an Intel Arc B580 GPU.

Structure of The Series

In this series, we will focus on the practical application of one-sample-per-pixel (1spp) denoising and supersampling, the metric used for visual quality evaluation, handling animations in a high-complexity scene of one trillion instanced triangles, and the tradeoffs between content creation and performance.

Denoising

Fig6_NoiseDenoised.png
Figure 6. 1spp noisy image vs. denoised and supersampled output.

Performance and image quality are proportional to the number of rays cast at each stage of path tracing. To save compute and memory traffic, we use 1spp and a single ray per bounce. Due to the stochastic nature of path tracing, the rendered image has significant noise: each pixel is determined by a single random light path, causing extreme fluctuations in brightness and color, especially in complex lighting scenarios such as indirect illumination, caustics, and soft shadows. To remove noise and reconstruct details, we use our spatiotemporal joint neural denoising and supersampling model.
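To see why a single sample per pixel is so noisy, consider a toy Monte Carlo estimator (a minimal sketch, not our renderer): the pixel's true radiance is an average over many possible light paths, but a 1spp frame picks just one random path, so its value swings wildly from frame to frame. The denoiser's job is to recover the converged value from these fluctuating single-sample estimates.

```python
import random

def shade_1spp(rng):
    # Hypothetical toy "scene": a path reaches a bright light with
    # probability 0.1 and is occluded otherwise. One sample per pixel
    # returns a single random path's contribution, so the per-frame
    # estimate is either very bright or completely dark.
    if rng.random() < 0.1:
        return 10.0   # path reached the light
    return 0.0        # path was occluded

rng = random.Random(42)
single = shade_1spp(rng)  # one frame: either 10.0 or 0.0, never in between
# Averaging many samples converges toward the true radiance (~1.0),
# which is what offline path tracers do and what real-time budgets forbid.
many = sum(shade_1spp(rng) for _ in range(100_000)) / 100_000
```

The gap between `single` and `many` is exactly the noise visible in Figure 6; a spatiotemporal denoiser effectively substitutes learned reconstruction for the extra samples.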

Check out our blog on denoising to dive into the details.

Content Creation

Fig7_ContentCreation.png
Figure 7. Jungle Ruins in RPTR Framework

The recently released Jungle Ruins scene lies at the heart of this project. It intertwines multiple open problems in large-scale real-time path tracing, such as very high-frequency detail with dynamic animation, massive instancing, and alpha testing, combined at open-world game-level scale. The scene utilizes advanced procedural generation techniques to create much of its vegetation and terrain elements, alongside world-partitioning strategies focused on manageability and scalability in both the creation and consumption of content by the research team.

For more details, see the Unveiling the Enchantment of Jungle Ruins blog post on the creative vision and artistic process.

For a deeper dive into the practical technical art considerations, see the new Jungle Ruins Scene: Technical Art Meets Real-Time Path-Tracing Research blog post.


Dynamic Geometry

Fig8_ASPartitions.png
Figure 8. Visualization of the partitioning of instances into TLAS fragments.

The large-scale open-world scene featured in Jungle Ruins poses further challenges for path tracing due to its geometric complexity. Millions of dynamic mesh instances need to be animated, which requires updating the acceleration structures prior to ray tracing. The two-level acceleration structures defined by modern ray tracing APIs do not scale well with this complexity. While the animation of the foliage can be efficiently amortized on a per-mesh level (BLAS), the high number of instances makes a full update of the top-level acceleration structure (TLAS) prohibitively costly. To this end, we demonstrate a solution that partitions the TLAS into subsets (AS fragments) that can be updated independently at a fraction of the cost of a global TLAS update.
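The scheduling idea behind AS fragments can be sketched in a few lines (an illustrative toy, not the actual RPTR implementation; the grid size and function names are assumptions): instances are partitioned spatially, and each frame only the fragments that contain updated instances are marked dirty and rebuilt, while untouched fragments reuse their previous build.

```python
# Partition a unit-square world into a 4x4 grid of TLAS fragments and,
# per frame, rebuild only the fragments whose instances moved.
GRID = 4  # 4x4 partition -> 16 independent TLAS fragments

def fragment_of(position):
    # Map an instance's (x, z) position in [0, 1)^2 to its grid cell.
    x, z = position
    return (min(int(x * GRID), GRID - 1), min(int(z * GRID), GRID - 1))

def dirty_fragments(instances, moved_ids):
    """Return the set of fragments that must be rebuilt this frame."""
    return {fragment_of(instances[i]) for i in moved_ids}

# Four instances; two of them (0 and 2) were animated this frame.
instances = {0: (0.10, 0.10), 1: (0.90, 0.90), 2: (0.12, 0.15), 3: (0.50, 0.50)}
dirty = dirty_fragments(instances, moved_ids={0, 2})
# Only one of the 16 fragments needs a rebuild; the other 15 are reused.
```

In practice an instance that crosses a fragment boundary also dirties its destination fragment, and fragment sizing trades rebuild cost against traversal overhead; those details are beyond this sketch.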

Check back soon for a technical deep dive on Dynamic Geometry as part of this blog series.

Quality Evaluation

Denoising the Jungle Ruins scene under strict computational constraints requires careful tradeoffs between visual quality and performance. To better understand the impact of key design decisions—such as network hyperparameters, training data distribution, and model size—on the perceived quality of the final render, we rely on objective video quality metrics.

To support this, we developed a novel video quality metric that leverages the perceptual uniformity of a pre-trained 3D-CNN feature space. This metric is calibrated using subjective data collected through user studies evaluating the visual quality of path-traced and denoised video sequences.
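The overall shape of such a metric can be illustrated with a small sketch (purely illustrative; the feature extractor, distance, and fit below are stand-ins, not our actual model): distances between reference and test videos are measured in a feature space, then mapped to a subjective-quality scale with a function whose parameters are fitted to user-study data.

```python
import math

def feature_distance(ref_feats, test_feats):
    # In the real metric the features come from a pre-trained 3D-CNN;
    # here we just take precomputed per-video feature vectors as input
    # and measure their Euclidean distance.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref_feats, test_feats)))

def calibrated_score(distance, a=1.0, b=0.5):
    # Map the raw feature distance to a 0..1 quality scale with a
    # logistic function; a and b would be fitted against subjective
    # scores collected in user studies.
    return 1.0 / (1.0 + math.exp(a * distance - b))

# Hypothetical feature vectors for a reference clip and a denoised clip.
d = feature_distance([0.20, 0.50, 0.10], [0.25, 0.45, 0.12])
score = calibrated_score(d)  # closer to 1.0 = perceptually closer to reference
```

The key design choice is that the distance is computed in a perceptually uniform feature space, so equal distances correspond to roughly equal visible differences, which makes the calibration to subjective data well behaved.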

Check back soon for a technical deep dive on Quality Evaluation as part of this blog series.

Acknowledgements

These results would not have been possible without the amazing people who contributed to the research and/or engineering aspects of this project. We would like to acknowledge Tobias Zirr, Johannes Meng, Attila Áfra, Miroslaw Pawlowski, Anis Benyoub, Thomas Chambon, Radosław Drabiński, Charlene Teets, Christoph Peters, and Traverse Research for their expertise, dedication, and innovative thinking.

About the Author
I support the Real-Time Graphics Research team at Intel, focusing on a combination of classical and novel neural technologies that push the boundaries of industrial research. Previously, I led the software engineering team within Meta Reality Labs' Graphics Research Team, working on a graphics stack for socially acceptable augmented reality (AR) glasses. Before that, I brought telepresence technology to the Microsoft HoloLens AR headset.