Artificial Intelligence (AI)
Discuss current events in AI and technological innovations with Intel® employees

Intel Presents Six Papers on Novel AI Research at ICML 2023


Scott Bair is a key voice at Intel Labs, sharing insights into innovative research for inventing tomorrow’s technology.



  • Intel presented six papers accepted at the 40th International Conference on Machine Learning (ICML) 2023, happening now through July 29.
  • Two papers were selected as spotlight oral papers at the conference: ProtST, which uses a ChatGPT-style design interface for protein design, and Settling the Reward Hypothesis, which explores the design of goals in reinforcement learning.


Intel Labs had six papers accepted at the 40th International Conference on Machine Learning (ICML) 2023, happening now through July 29. Among the six papers on artificial intelligence (AI) for science, reinforcement learning, generative AI in 3D computer vision, and the intelligent edge, two were selected as spotlight oral papers at the conference. The first spotlight oral paper focuses on ProtST, which uses a ChatGPT-style design interface for protein design to enhance protein sequence pre-training and improve protein language models' understanding of biomedical texts. The second spotlight oral paper highlights Settling the Reward Hypothesis, which explores the design of goals in reinforcement learning.


AI for Science

Spotlight Oral Paper: ProtST: Multi-Modality Learning of Protein Sequences and Biomedical Texts

Current protein language models (PLMs) learn protein representations mainly from their sequences and can capture co-evolutionary information. However, the models are unable to explicitly acquire protein functions, which is the end goal of protein representation learning. Fortunately, textual property descriptions are available for many proteins, describing their various functions. The ProtDescribe dataset was built to augment protein sequences with text descriptions of their functions and other important properties. Based on this dataset, researchers propose the ProtST framework to enhance protein sequence pre-training and understanding of biomedical texts. During pre-training, three types of tasks were designed: unimodal mask prediction, multimodal representation alignment, and multimodal mask prediction. These tasks enhance a PLM with protein property information at different granularities while preserving the PLM's original representation power. On downstream tasks, ProtST enables both supervised learning and zero-shot prediction. Experiments verify the advantage of ProtST-induced PLMs over previous ones on diverse representation learning benchmarks, and demonstrate ProtST's effectiveness on zero-shot protein classification. ProtST also enables functional protein retrieval from a large-scale database without any function annotation.
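The multimodal representation alignment task described above can be illustrated with a contrastive objective that pulls matched protein-sequence and text embeddings together. The sketch below is a generic symmetric InfoNCE-style loss, not ProtST's exact objective; the function name, temperature value, and random embeddings are illustrative assumptions.

```python
import numpy as np

def alignment_loss(seq_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss aligning protein-sequence and text
    embeddings (a generic sketch of multimodal representation alignment;
    ProtST's actual objective and hyperparameters may differ)."""
    # L2-normalize both modalities so similarities are cosine similarities
    s = seq_emb / np.linalg.norm(seq_emb, axis=1, keepdims=True)
    t = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature          # pairwise similarity matrix
    idx = np.arange(len(s))                 # matched pairs sit on the diagonal

    def xent(l):
        # cross-entropy of the diagonal (matched) entries
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # sequence-to-text and text-to-sequence directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 32))
# matched embeddings yield a much lower loss than unrelated ones
print(alignment_loss(emb, emb), alignment_loss(emb, rng.normal(size=(8, 32))))
```

Minimizing such a loss encourages a sequence embedding to be closer to its own text description than to any other description in the batch.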

FAENet: Frame Averaging Equivariant GNN for Materials Modeling

Applications of machine learning techniques for materials modeling typically involve functions that are known to be equivariant or invariant to specific symmetries. While graph neural networks (GNNs) have proven successful in such applications, conventional GNN approaches that enforce symmetries via the model architecture often reduce expressivity, scalability, or comprehensibility. Researchers introduce 1) a flexible, model-agnostic framework based on stochastic frame averaging that enforces E(3) equivariance or invariance, without any architectural constraints, and 2) FAENet: a simple, fast, and expressive GNN that leverages stochastic frame averaging to process geometric information without constraints. The researchers prove the validity of the method theoretically and demonstrate its high accuracy and computational scalability in materials modeling on the OC20 dataset (S2EF, IS2RE), as well as on common molecular modeling tasks (QM9, QM7-X).
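The core idea of frame averaging is to canonicalize atom coordinates with PCA-derived frames and average an unconstrained model's predictions over them, making the result symmetry-invariant by construction. The toy sketch below enumerates all sign choices of the principal axes (FAENet's stochastic variant samples frames instead, and the paper handles degeneracies the sketch ignores); the ReLU-sum "model" is a deliberately non-invariant placeholder.

```python
import numpy as np
from itertools import product

def frames(pos):
    """Canonicalized copies of a point cloud under all PCA frames
    (a sketch of frame averaging; FAENet uses a stochastic variant)."""
    centered = pos - pos.mean(axis=0)            # translation invariance
    _, vecs = np.linalg.eigh(centered.T @ centered)  # principal axes
    # enumerate the sign ambiguity of each principal axis
    return [centered @ (vecs * np.array(signs))
            for signs in product([1.0, -1.0], repeat=3)]

def invariant_energy(pos, model=lambda x: np.sum(np.maximum(x, 0.0))):
    """Average an arbitrary (non-invariant) model over all frames,
    yielding an E(3)-invariant prediction without constraining the model."""
    return float(np.mean([model(p) for p in frames(pos)]))

rng = np.random.default_rng(1)
pos = rng.normal(size=(10, 3))                   # toy "molecule"
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random orthogonal transform
# same energy after rotating and translating the point cloud
print(abs(invariant_energy(pos) - invariant_energy(pos @ Q + 5.0)) < 1e-6)
```

The design point is that the base model itself needs no built-in symmetry, which is what lets FAENet stay simple and fast.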

Multi-Objective GFlowNets

Researchers study the problem of generating diverse candidates in the context of multi-objective optimization. In many applications of machine learning such as drug discovery and material design, the goal is to generate candidates that simultaneously optimize a set of potentially conflicting objectives. Moreover, these objectives are often imperfect evaluations of some underlying property of interest, making it important to generate diverse candidates to have multiple options for expensive downstream evaluations. The researchers propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse Pareto optimal solutions, based on GFlowNets. They introduce two variants of MOGFNs: MOGFN-PC, which models a family of independent sub-problems defined by a scalarization function with reward-conditional GFlowNets, and MOGFN-AL, which solves a sequence of sub-problems defined by an acquisition function in an active learning loop. Experiments on a wide variety of synthetic and benchmark tasks demonstrate the advantages of the proposed methods in terms of Pareto performance and, importantly, improved candidate diversity, which is the main contribution of this work.
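A scalarization function of the kind MOGFN-PC conditions on collapses a vector of objectives into a single reward, with a preference weight vector defining each sub-problem. The weighted-sum form below is one common choice, shown as a sketch; the candidate values and names are hypothetical, and the paper also studies other scalarizations.

```python
import numpy as np

def scalarize(objectives, weights):
    """Weighted-sum scalarization of conflicting objectives (a sketch of
    the preference conditioning in MOGFN-PC; other scalarization
    functions are possible)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize onto the preference simplex
    return float(np.dot(w, objectives))

# Two hypothetical candidates trading off two objectives,
# e.g. binding affinity vs. solubility in drug discovery.
drug_a = [0.9, 0.2]
drug_b = [0.3, 0.8]
print(scalarize(drug_a, [1, 0]))   # -> 0.9 (preference fully on objective 1)
print(scalarize(drug_b, [1, 1]))   # -> 0.55 (equal preference)
```

Sampling many weight vectors (e.g. from a Dirichlet distribution) and conditioning one generative policy on them is what lets a single MOGFN cover the whole Pareto front rather than a single trade-off point.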


Reinforcement Learning

Spotlight Oral Paper: Settling the Reward Hypothesis

The reward hypothesis posits that, "all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward)." The researchers aim to fully settle this hypothesis. The resolution does not conclude with a simple affirmation or refutation, but rather specifies completely the implicit requirements on goals and purposes under which the hypothesis holds.
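The quantity at the center of the hypothesis, the cumulative (discounted) sum of a scalar reward signal, can be computed with a few lines; the sketch below uses the standard backward recursion with an illustrative reward trace and discount factor.

```python
def discounted_return(rewards, gamma=0.9):
    """Cumulative discounted sum of a scalar reward signal, the quantity
    the reward hypothesis says suffices to encode goals and purposes."""
    g = 0.0
    for r in reversed(rewards):   # G_t = r_t + gamma * G_{t+1}
        g = r + gamma * g
    return g

# reward of 1 at steps 0 and 2, discounted at gamma = 0.5:
print(discounted_return([1.0, 0.0, 1.0], gamma=0.5))  # -> 1.25
```

The paper's question is the converse direction: exactly which notions of "goal" admit some scalar reward whose expected return orders behaviors the intended way.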


Generative AI In 3D Computer Vision

NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields Against Adversarial Perturbations

Generalizable Neural Radiance Fields (GNeRF) are one of the most promising real-world solutions for novel view synthesis, thanks to their cross-scene generalization capability and thus the possibility of instant rendering on new scenes. While adversarial robustness is essential for real-world applications, little work has been devoted to understanding its implications for GNeRF. Because GNeRF is conditioned on source views from new scenes, which are often acquired from the Internet or third-party providers, researchers hypothesize that it raises new security concerns for real-world applications. Meanwhile, existing understanding of and solutions for neural networks' adversarial robustness may not be applicable to GNeRF, due to its 3D nature and uniquely diverse operations. To this end, researchers present NeRFool to understand the adversarial robustness of GNeRF. Specifically, NeRFool unveils vulnerability patterns and important insights regarding GNeRF's adversarial robustness. Building on these insights, researchers further develop NeRFool+, which integrates two techniques capable of effectively attacking GNeRF across a wide range of target views, and provide guidelines for defending against the proposed attacks.
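Adversarial perturbation of source views is typically carried out with projected gradient ascent inside a small L-infinity ball, so that the attacked views look unchanged to a human. The sketch below shows that generic PGD loop on a toy stand-in loss, not NeRFool's actual objective or update rules, and a real attack would differentiate through the radiance-field renderer rather than the hypothetical mean-pixel loss used here.

```python
import numpy as np

def pgd_attack(grad_fn, views, steps=10, eps=0.03, alpha=0.01, seed=0):
    """L-inf projected gradient ascent on source views (a sketch of the
    attack family NeRFool studies; the paper's actual objectives and
    update rules for GNeRF differ)."""
    rng = np.random.default_rng(seed)
    x = views + rng.uniform(-eps, eps, size=views.shape)  # random start
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))               # ascend the loss
        x = views + np.clip(x - views, -eps, eps)         # project to eps-ball
        x = np.clip(x, 0.0, 1.0)                          # valid pixel range
    return x

# Toy stand-in for a rendering loss: push the mean pixel value away from
# the clean rendering (hypothetical; chosen only to make the loop runnable).
rng = np.random.default_rng(1)
views = rng.uniform(0.2, 0.8, size=(4, 8, 8))             # 4 tiny source views
target = views.mean()
loss = lambda x: (x.mean() - target) ** 2
grad = lambda x: 2.0 * (x.mean() - target) * np.ones_like(x) / x.size
adv = pgd_attack(grad, views)
print(loss(adv) > loss(views))   # perturbed views degrade the toy rendering
```

The projection step is what keeps the perturbation imperceptible, which is exactly why poisoned third-party source views are a plausible threat model for GNeRF.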


Intelligent Edge

XLDA: Linear Discriminant Analysis for Scaling Continual Learning to Extreme Classification Settings at the Edge

Streaming Linear Discriminant Analysis (LDA), although proven in class-incremental learning deployments at the edge with a limited number of classes (up to 1,000), has not been proven for deployment in extreme classification scenarios. This paper presents: (a) XLDA, a framework for class-incremental learning in edge deployments where the LDA classifier is proven to be equivalent to a fully connected (FC) layer, including in extreme classification scenarios, and (b) optimizations to enable XLDA-based training and inference for edge deployments with constrained compute resources. The work shows up to a 42x training speedup using a batched training approach and up to a 5x inference speedup with nearest-neighbor search on extreme datasets like AliProducts (50k classes) and Google Landmarks V2 (81k classes).
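The LDA-to-FC equivalence rests on a classical fact: with class means and a shared covariance, the LDA discriminant is linear in the features, so its scores can be folded into a weight matrix and bias. The sketch below illustrates that conversion under simplifying assumptions (identity covariance, a small shrinkage term for numerical stability, no class priors); XLDA's streaming statistics and edge optimizations are not shown.

```python
import numpy as np

def lda_as_fc(means, cov):
    """Fold LDA statistics (class means, shared covariance) into the
    weights and bias of an equivalent fully connected layer (a sketch of
    the classical equivalence XLDA builds on; details differ in the paper)."""
    # small shrinkage term keeps the inverse well-conditioned
    prec = np.linalg.inv(cov + 1e-4 * np.eye(cov.shape[0]))
    W = means @ prec                                  # (num_classes, dim)
    b = -0.5 * np.einsum('kd,kd->k', W, means)        # per-class bias
    return W, b

rng = np.random.default_rng(0)
means = rng.normal(size=(5, 16))          # 5 classes, 16-dim features
cov = np.eye(16)                          # simplifying assumption
W, b = lda_as_fc(means, cov)
x = means[3] + 0.01 * rng.normal(size=16) # feature near class 3's mean
print(int(np.argmax(W @ x + b)))          # -> 3
```

Because inference is then a single matrix-vector product, the per-class scores can also be approximated with nearest-neighbor search over the rows of W, which is the kind of shortcut that yields the inference speedups reported above.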

About the Author
Scott Bair is a Senior Technical Creative Director for Intel Labs, chartered with growing awareness for Intel’s leading-edge research activities, like AI, Neuromorphic Computing, and Quantum Computing. Scott is responsible for driving marketing strategy, messaging, and asset creation for Intel Labs and its joint-research activities. In addition to his work at Intel, he has a passion for audio technology and is an active father of 5 children. Scott has over 23 years of experience in the computing industry bringing new products and technology to market. During his 15 years at Intel, he has worked in a variety of roles spanning R&D, architecture, strategic planning, product marketing, and technology evangelism. Scott has an undergraduate degree in Electrical and Computer Engineering and a Master of Business Administration from Brigham Young University.