Santiago Miret is an AI research scientist at Intel Labs, where he focuses on developing artificial intelligence solutions and exploring the intersection of AI and the physical sciences.
Highlights:
- Intel and Mila collaborated on FAENet, a new data-centric model paradigm that improves both modeling and compute efficiency across different types of materials modeling datasets.
- FAENet holds great potential for accelerating the large-scale evaluation of 3D materials properties for materials design, which is essential to applications such as low-carbon energy, sustainable agriculture, and drug discovery.
- The research paper, a collaboration between Intel Labs and the Mila groups of David Rolnick and Yoshua Bengio, was accepted at the International Conference on Machine Learning (ICML 2023).
Through their continued efforts to solve real-world problems using advanced artificial intelligence (AI), Intel and the Mila - Quebec AI Institute have created FAENet (Frame Averaging Equivariant Network), a new graph neural network (GNN) built on a data-centric model paradigm that improves both modeling and compute efficiency across different types of materials modeling datasets. FAENet holds great potential for accelerating the large-scale evaluation of 3D materials properties for materials design, which is essential to applications such as low-carbon energy, sustainable agriculture, and drug discovery. This geometric deep learning (GDL) model speeds up the time-consuming mathematical computations traditionally needed to ensure that a model respects the physical symmetries that commonly occur in the real world, such as mirror reflections and translations, under which a material's properties must remain unchanged. The research paper, a collaboration between Intel Labs and the Mila groups of David Rolnick and Yoshua Bengio, was accepted at the International Conference on Machine Learning (ICML 2023).
Simply measuring materials properties quickly and efficiently can be a major bottleneck in the discovery of new materials. While powerful computational chemistry techniques have shown great promise in modeling materials properties in simulation, thereby circumventing expensive and potentially inefficient experimental procedures, many computations still require long calculation times, often on the order of days or weeks, to properly determine the relevant properties of complex modern materials. Given the expense of computational chemistry, researchers at the intersection of diverse scientific fields and AI have developed various datasets and methods, leading to the creation of a new subfield of machine learning called geometric deep learning. GDL can model real-world materials with advanced AI tools, significantly reducing the time needed to evaluate the properties of a wide range of materials.
Using FAENet to Accelerate Materials Properties Evaluations
Figure 1. FAENet data-centric modeling workflow. The data is first reduced in dimensionality using principal component analysis (PCA), an established dimensionality reduction method, and then processed into a canonical format using the mathematics of frame-averaging. This preserves desired symmetries for greater modeling performance and efficiency.
The key innovation behind FAENet is encoding the desired symmetries in a data-centric way instead of encoding them in the model architecture. In FAENet, the dimensionality of the input data is first reduced using principal component analysis (PCA), and the data is then projected into a unique, canonical frame using the mathematics of frame-averaging. Frame-averaging mathematically guarantees that many of the desired symmetries of the data are respected without constraining the model architecture in any way. To further increase the computational efficiency of the model, we propose stochastic frame-averaging, which is even faster than regular frame-averaging and works just as well for materials modeling in practice.
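To make the data-centric idea concrete, the sketch below shows one way to build PCA-based frames for a 3D atomic point cloud and to sample a single frame stochastically. It is a minimal illustration in the spirit of the approach, not the FAENet reference implementation; the function names and the restriction to proper rotations are assumptions made for clarity.

```python
import numpy as np
from itertools import product

def pca_frames(pos):
    """Build the set of PCA-based frames for a 3D point cloud.

    pos: (N, 3) array of atom positions.
    Returns a list of (R, t) pairs, where R is a 3x3 rotation and t the
    centroid, such that (pos - t) @ R is a canonical view of the input.
    Sign ambiguity of the eigenvectors yields up to 8 frames; keeping
    only det(R) = +1 restricts to proper rotations (4 frames).
    """
    t = pos.mean(axis=0)                    # translation: center the cloud
    centered = pos - t
    cov = centered.T @ centered / len(pos)  # 3x3 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)        # columns = principal axes
    frames = []
    for signs in product([1.0, -1.0], repeat=3):
        R = eigvecs * np.array(signs)       # flip the sign of each axis
        if np.linalg.det(R) > 0:            # keep proper rotations only
            frames.append((R, t))
    return frames

def stochastic_frame(pos, rng=None):
    """Stochastic frame-averaging: sample one frame per forward pass
    instead of averaging model outputs over all frames."""
    if rng is None:
        rng = np.random.default_rng()
    frames = pca_frames(pos)
    R, t = frames[rng.integers(len(frames))]
    return (pos - t) @ R                    # canonical coordinates
```

Canonical coordinates produced this way can be fed to any unconstrained GNN: because every rotation or translation of the input maps to the same small set of frames, the model's predictions inherit the symmetry without any architectural restrictions.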
Figures 2 and 3. On left, FAENet modeling performance on predicting energy for catalytic materials; FAENet is both faster and more accurate than a set of baseline methods. On right, FAENet performance on a common molecular property prediction benchmark; FAENet is as fast as the best baseline while providing significant modeling improvements.
As shown in Figures 2 and 3, FAENet sets a new state of the art by exceeding the modeling performance and computational speed of many common baselines. Its data-centric paradigm enables a significantly more efficient model design.
Enabling CPU-Based Training for GDL Methods
Beyond improving materials property modeling, this data-centric paradigm unlocks the potential to train various kinds of advanced GDL methods on CPU-centric hardware, making advanced AI accessible to more researchers and practitioners. In particular, the Intel Labs and Mila team recently published a paper on PhAST, which introduces a series of algorithmic innovations that make training GDL models for materials property prediction significantly faster and more accessible.
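As an illustration of what CPU-based training can look like, the sketch below wraps a generic PyTorch model with Intel Extension for PyTorch (ipex) to enable bfloat16 training on Xeon CPUs. The model, data, and hyperparameters are placeholders for illustration, not the PhAST or MegNet setup.

```python
import torch
import torch.nn as nn

# Placeholder model and data; a FAENet- or MegNet-style GNN and a real
# materials dataset would slot into the same training pattern.
model = nn.Sequential(nn.Linear(128, 256), nn.SiLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = [(torch.randn(32, 128), torch.randn(32, 1)) for _ in range(10)]

try:
    # Intel Extension for PyTorch adds CPU-specific optimizations such as
    # operator fusion and bfloat16 acceleration on 4th Gen Xeon (SPR).
    import intel_extension_for_pytorch as ipex
    model, optimizer = ipex.optimize(model, optimizer=optimizer,
                                     dtype=torch.bfloat16)
except ImportError:
    pass  # stock PyTorch on CPU still works, without the extra speedups

model.train()
for features, targets in loader:
    optimizer.zero_grad()
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        loss = nn.functional.l1_loss(model(features), targets)
    loss.backward()
    optimizer.step()
```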
Figures 4 and 5. On left, 4th Gen Intel Xeon (SPR) training throughput for geometric deep learning on a materials property prediction task (Duval et al., 2022). On right, comparison of 4th Gen Intel Xeon (SPR) to an NVIDIA A100 GPU on geometric deep learning model training.
Figures 4 and 5 show results of CPU-based training of MegNet, a chemistry-inspired deep learning model developed at UCSD and implemented in collaboration with Intel Labs, on the Open Catalyst Project force prediction task. This task requires high throughput of structural materials data and lends itself well to CPU-based training.