
Intel Labs Details Latest Innovations in Deep Learning at ICLR 2023


Scott Bair is a key voice at Intel Labs, sharing insights into innovative research for inventing tomorrow’s technology.

 

Highlights:

  • The International Conference on Learning Representations (ICLR) 2023 will run from May 1st through 5th in Kigali, Rwanda.
  • Intel Labs’ innovations in model linearization include a three-stage training method, guided by a novel measure of each non-linearity layer’s rectified linear unit (ReLU) sensitivity, that trains a DNN model with significantly fewer ReLUs.
  • To advance graph learning, Intel Labs also presents a novel end-to-end neural model for learning compositional logical rules and the first meta-learning approach for evaluation-free graph learning model selection. Read the paper abstracts below for more information.

 

The International Conference on Learning Representations (ICLR) is a worldwide gathering of professionals dedicated to the advancement of AI, more specifically representation learning, or deep learning. The 2023 conference will run from May 1st through 5th in Kigali, Rwanda. Intel Labs is proud to present its latest innovations in model linearization and graph learning at this year’s conference.

Works from Intel Labs include SENet, a three-stage training method, driven by a novel measure of non-linearity layers’ rectified linear unit (ReLU) sensitivity, that trains a model with significantly fewer ReLUs; NCRL, a novel end-to-end neural model for learning compositional logical rules; and METAGL, the first meta-learning approach for evaluation-free graph learning model selection.

 

Learning To Linearize Deep Neural Networks For Secure And Efficient Private Inference

The large number of rectified linear unit (ReLU) non-linearity operations in existing deep neural networks makes them ill-suited for latency-efficient private inference (PI). Existing techniques for reducing ReLU operations often involve manual effort and sacrifice significant accuracy. This paper presents a novel measure of non-linearity layers’ ReLU sensitivity, which mitigates the time-consuming manual effort of identifying which ReLUs can be removed. Based on this sensitivity, researchers then present SENet, a three-stage training method that, for a given ReLU budget, automatically assigns per-layer ReLU counts, decides the ReLU locations for each layer’s activation map, and trains a model with significantly fewer ReLUs to potentially yield latency- and communication-efficient PI. Experimental evaluations with multiple models on various datasets show SENet’s superior performance both in terms of reduced ReLUs and improved classification accuracy compared to existing alternatives. In particular, SENet can yield models that require up to ∼2× fewer ReLUs while maintaining similar accuracy. For a similar ReLU budget, SENet can yield models with ∼2.32% improved classification accuracy, evaluated on CIFAR-100.
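
As a rough illustration of the core idea, the sketch below is a minimal, hypothetical PyTorch example, not the authors’ SENet code: a layer’s ReLUs can be selectively disabled through a binary mask, and a simple sensitivity proxy can be computed as the loss increase when that layer is fully linearized. The names PartialReLU and relu_sensitivity are illustrative assumptions.

```python
# Hypothetical sketch of partial-ReLU masking and a per-layer sensitivity
# proxy (assumed design, not the paper's implementation). Linear passthrough
# is cheap under private-inference protocols, so fewer active ReLUs help.
import torch
import torch.nn as nn

class PartialReLU(nn.Module):
    """Applies ReLU only where mask == 1; identity passthrough elsewhere."""
    def __init__(self, num_features):
        super().__init__()
        # Start with ReLU everywhere; budget allocation would zero entries.
        self.register_buffer("mask", torch.ones(num_features))

    def forward(self, x):
        return self.mask * torch.relu(x) + (1 - self.mask) * x

def relu_sensitivity(model, layer, loss_fn, x, y):
    """Crude sensitivity proxy: loss increase when this layer's ReLUs
    are replaced by the identity (mask set to 0)."""
    with torch.no_grad():
        base = loss_fn(model(x), y)
        saved = layer.mask.clone()
        layer.mask.zero_()                  # linearize this layer
        linearized = loss_fn(model(x), y)
        layer.mask.copy_(saved)             # restore the original mask
    return (linearized - base).item()
```

In SENet’s actual three-stage pipeline, such sensitivity estimates drive the per-layer ReLU budget allocation, followed by per-location ReLU selection within each activation map and a final retraining stage.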

 

MetaGL: Evaluation-Free Selection of Graph Learning Models via Meta-Learning

Given a graph learning task, such as link prediction, on a new graph, how can we select the best method as well as its hyperparameters (collectively called a model) without having to train or evaluate any model on the new graph? Model selection for graph learning has been largely ad hoc. A typical approach has been to apply popular methods to new datasets, but this is often suboptimal. On the other hand, systematically comparing models on the new graph can quickly become too costly, or even impractical. This work presents the first meta-learning approach for evaluation-free graph learning model selection, called METAGL, which utilizes the prior performances of existing methods on various benchmark graph datasets to automatically select an effective model for the new graph, without any model training or evaluations. To quantify similarities across a wide variety of graphs, Intel Labs introduces specialized meta-graph features that capture the structural characteristics of a graph. Researchers then design the G-M network, which represents the relations among graphs and models, and develop a graph-based meta-learner operating on this G-M network, which estimates the relevance of each model to different graphs. Extensive experiments show that using METAGL to select a model for the new graph greatly outperforms several existing meta-learning techniques tailored for graph learning model selection (up to 47% better), while being extremely fast at test time (∼1 sec).
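
To make the setting concrete, here is a deliberately simplified stand-in, with assumed feature choices and function names, and not the paper’s meta-graph features or G-M network: each graph is summarized by a handful of structural statistics, and the recommended model is the one that performed best, on average, on the most similar benchmark graphs.

```python
# Hypothetical, simplified stand-in for evaluation-free model selection:
# describe each graph with structural meta-features, then recommend the
# model that performed best on the nearest benchmark graphs.
import numpy as np
import networkx as nx

def meta_features(G):
    """A few cheap structural statistics as a meta-feature vector.
    (In practice, features should be standardized before comparison.)"""
    degs = np.array([d for _, d in G.degree()])
    return np.array([
        G.number_of_nodes(),
        G.number_of_edges(),
        nx.density(G),
        degs.mean(),
        degs.std(),
        nx.average_clustering(G),
    ])

def select_model(new_graph, bench_feats, perf_matrix, model_names, k=3):
    """perf_matrix[i, j] = observed performance of model j on benchmark
    graph i; bench_feats[i] = meta-feature vector of benchmark graph i."""
    f = meta_features(new_graph)
    dists = np.linalg.norm(bench_feats - f, axis=1)
    nearest = np.argsort(dists)[:k]           # k most similar benchmarks
    avg_perf = perf_matrix[nearest].mean(axis=0)
    return model_names[int(avg_perf.argmax())]
```

METAGL replaces this nearest-neighbor heuristic with learned meta-graph features and a graph-based meta-learner over the G-M network, but the contract is the same: graph in, model recommendation out, with no training or evaluation on the new graph.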

 

Neural Compositional Rule Learning for Knowledge Graph Reasoning

Learning logical rules is critical to improving reasoning in knowledge graphs. This is due to their ability to provide logical and interpretable explanations when used for predictions, as well as their ability to generalize to other tasks, domains, and data. While recent methods have been proposed to learn logical rules, most of them are either restricted by their computational complexity, and thus cannot handle the large search space of large-scale knowledge graphs (KGs), or show poor generalization when exposed to data outside the training set. This paper proposes NCRL, an end-to-end neural model for learning compositional logical rules. NCRL detects the best compositional structure of a rule body and breaks it into small compositions in order to infer the rule head. By recurrently merging compositions in the rule body with a recurrent attention unit, NCRL finally predicts a single rule head. Experimental results show that NCRL learns high-quality rules and generalizes well. Specifically, this paper shows that NCRL is scalable, efficient, and yields state-of-the-art results for knowledge graph completion on large-scale KGs. Moreover, researchers tested NCRL for systematic generalization by learning to reason on small-scale observed graphs and evaluating on larger unseen ones.
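
The recursive merging step can be sketched as follows, in a hypothetical PyTorch toy with assumed module names and a hard argmax standing in for the paper’s soft recurrent attention: adjacent pairs of relation embeddings in the rule body are scored, the best-scoring pair is merged into a single composition vector, and the process repeats until one vector remains, which is decoded into a rule head.

```python
# Hypothetical sketch of NCRL-style recursive composition (assumed shapes
# and names, not the authors' implementation).
import torch
import torch.nn as nn

class CompositionMerger(nn.Module):
    def __init__(self, dim, num_relations):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)          # scores an adjacent pair
        self.merge = nn.Linear(2 * dim, dim)        # composes a pair into one
        self.head = nn.Linear(dim, num_relations)   # rule-head classifier

    def forward(self, body):                        # body: (length, dim)
        while body.size(0) > 1:
            # Embeddings of all adjacent pairs in the current rule body.
            pairs = torch.cat([body[:-1], body[1:]], dim=-1)
            weights = self.score(pairs).squeeze(-1)
            best = int(weights.argmax())            # most composable pair
            merged = torch.tanh(self.merge(pairs[best]))
            # Replace the chosen pair with its single composition vector.
            body = torch.cat([body[:best], merged.unsqueeze(0),
                              body[best + 2:]])
        return self.head(body[0])                   # logits over rule heads
```

The hard argmax keeps this sketch readable but is not differentiable; NCRL’s recurrent attention unit merges compositions softly, so the whole model can be trained end-to-end with gradients.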

 

About the Author
Scott Bair is a Senior Technical Creative Director for Intel Labs, chartered with growing awareness of Intel’s leading-edge research activities, such as AI, neuromorphic computing, and quantum computing. Scott is responsible for driving marketing strategy, messaging, and asset creation for Intel Labs and its joint-research activities. In addition to his work at Intel, he has a passion for audio technology and is an active father of five children. Scott has over 23 years of experience in the computing industry bringing new products and technology to market. During his 15 years at Intel, he has worked in a variety of roles spanning R&D, architecture, strategic planning, product marketing, and technology evangelism. Scott has an undergraduate degree in Electrical and Computer Engineering and a Master of Business Administration from Brigham Young University.