
Argonne’s Aurora Supercomputer Helps Power Breakthrough Simulations of Quantum Materials

IntelAI
Employee

Quantum materials have the potential to transform future computing, energy and electronics technologies. But understanding and controlling their unusual electronic and magnetic properties requires some of the world’s most powerful computing hardware and software.

Using three U.S. Department of Energy (DOE) supercomputers, researchers from the University of Southern California (USC) and DOE’s Lawrence Berkeley National Laboratory developed new ways to model these complex systems with greater precision than ever before. The team worked with Aurora at Argonne National Laboratory, Frontier at Oak Ridge National Laboratory, and Perlmutter at Berkeley Lab. Together, they improved the open-source BerkeleyGW software to achieve a new level of accuracy in simulating the behavior of quantum materials.

To uncover what drives their properties, the researchers are using the code to track how electrons move and interact inside the materials – a task that pushes the limits of even the most advanced computers.

The team is conducting simulations at an unprecedented scale, going beyond a single type of calculation to advance the theoretical methods themselves. Their approach moves from static representations of electron behavior to dynamic simulations that couple the motion of electrons with that of the nuclei, marking an important step toward understanding phenomena such as superconductivity and the performance of transistors and optical devices.

Their work was named a finalist for the Association for Computing Machinery’s 2025 Gordon Bell Prize, which honors outstanding achievement in high-performance computing.

Aurora, built in partnership with Intel and Hewlett Packard Enterprise, played a key role in the team’s groundbreaking research. Its large memory capacity and scalable architecture made it possible to carry out memory-intensive simulations of systems with tens of thousands of atoms. These runs allowed the researchers to capture quantum effects across larger and more complex systems.

Capturing Quantum Effects at New Scales

The unusual properties of quantum materials come from how their electrons move and interact with each other and with atomic vibrations called phonons. These combined effects, known as many-body interactions, control how a material conducts electricity, absorbs light and stores energy. Simulating these effects accurately has long been a major challenge.

Density functional theory (DFT) is one of the most widely used methods for studying electron behavior in materials, but because it relies on approximations that simplify complex interactions, it can miss important details and features. To simulate electron behavior more precisely, the USC–Berkeley Lab team uses the GW approach with the BerkeleyGW code. The name “GW” comes from the two quantities it calculates: G describes the motion of an electron through a material, and W describes how electrons influence one another through the screened Coulomb interaction. Together, they provide a more realistic picture of how electrons interact, leading to more accurate predictions of a material’s properties.
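For readers who want the underlying equations, the standard textbook form of the GW method (not spelled out in the article itself) combines G and W into a self-energy Σ, which then corrects the DFT energies. A schematic sketch in conventional notation:

```latex
% GW self-energy: schematically \Sigma = iGW, the product of the
% Green's function G and the screened Coulomb interaction W.
\Sigma(\mathbf{r},\mathbf{r}';\omega) =
  \frac{i}{2\pi}\int d\omega'\,
  G(\mathbf{r},\mathbf{r}';\omega+\omega')\,
  W(\mathbf{r},\mathbf{r}';\omega')\, e^{i\omega'\delta}

% Quasiparticle energies follow by replacing the DFT
% exchange-correlation potential V_{xc} with \Sigma:
E^{\mathrm{QP}}_{n\mathbf{k}} = E^{\mathrm{DFT}}_{n\mathbf{k}}
  + \langle \psi_{n\mathbf{k}} \,|\,
    \Sigma\!\left(E^{\mathrm{QP}}_{n\mathbf{k}}\right) - V_{xc}
  \,|\, \psi_{n\mathbf{k}} \rangle
```

The second equation is what closes the band-gap error discussed below: the correction term Σ − V_{xc} supplies the many-body physics that the DFT approximation leaves out.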

For example, when modeling a material’s band gap (a fundamental property of semiconductors and insulators that determines how they absorb light), standard DFT methods can deviate from experimental results by 50 percent or more. In contrast, the GW approach reduces this error to only a few percent. That level of accuracy is crucial for understanding how materials like silicon, one of the most important for solar energy, absorb different wavelengths of sunlight.

Building on the GW approach, the team developed a capability called GW perturbation theory, or GWPT, which couples the key quantum interactions within a single framework. This capability enables simulations that were previously out of reach, allowing the team to predict properties that are critical for designing nanodevices, including how materials conduct electricity and manage heat.
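As described in the GWPT literature (the article does not give the formula), the key quantity is the electron–phonon coupling matrix element, evaluated with the phonon perturbation acting on the GW-corrected potential rather than on the DFT potential alone. Schematically, and hedged as a sketch of the published method rather than the team's exact implementation:

```latex
% Electron-phonon coupling at the GW level: the phonon perturbation
% \partial_{\mathbf{q}\nu} acts on the self-energy-corrected potential,
% not just on the bare DFT potential.
g^{\mathrm{GW}}_{mn\nu}(\mathbf{k},\mathbf{q}) =
  \langle \psi_{m\mathbf{k}+\mathbf{q}} \,|\,
    \partial_{\mathbf{q}\nu}\!\left( V^{\mathrm{DFT}}
      + \Sigma - V_{xc} \right)
  \,|\, \psi_{n\mathbf{k}} \rangle
```

Coupling the electronic self-energy to nuclear motion in this way is what lets a single framework address phonon-mediated phenomena such as superconductivity.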

The team’s work also expands the capability of the GW method to handle far larger and more complex systems. Using DOE’s exascale supercomputers, their simulations reached over one exaflops on Frontier and more than 0.7 exaflops on Aurora, establishing new benchmarks for performance and scale in quantum-mechanical calculations. One exaflops is equivalent to a quintillion, or a billion billion, calculations per second.

Building on a Decade of Progress

The researchers’ success builds on more than a decade of effort to make BerkeleyGW faster, more flexible and better suited to evolving computer architectures, including exascale systems equipped with graphics processing units (GPUs). Their earlier work with the code was recognized as a Gordon Bell Prize finalist in 2020.

The team began developing GPU-based capabilities roughly a decade ago, when such architectures were still uncommon. Through sustained optimization and adaptation, they learned how to ensure the software performs efficiently across new generations of hardware. That portability has allowed the team to optimize BerkeleyGW for vastly different computing platforms. The result is a code that performs efficiently on Intel (Aurora), AMD (Frontier) and NVIDIA (Perlmutter) GPUs, ensuring that it remains highly usable as architectures continue to evolve.

With the release of BerkeleyGW 4.0, the team’s improvements are now available to the broader research community, enabling more scientists to study complex materials on current and future supercomputers.

The study, “Advancing Quantum Many-Body GW Calculations on Exascale Supercomputing Platforms,” was authored by Benran Zhang, Chih-En Hsu and Zhenglu Li from USC; Daniel Weinberg, Steven Louie, Jack Deslippe and Mauro Del Ben from Berkeley Lab; Aaron Altman, Yuming Shi and Felipe da Jornada from Stanford University; James White III from Oak Ridge National Laboratory; and Derek Vigil-Fowler from the National Renewable Energy Laboratory.

Cover Image Credit: Chih-En Hsu, USC


Notices and Disclaimers

Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.
Intel technologies may require enabled hardware, software, or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.