Intel Labs Presents 31 Papers at NeurIPS 2023
12-06-2023
This year, Intel Labs presents 31 papers at NeurIPS, including 12 at the main conference. Contributi...
0 Kudos | 0 Replies
Enabling In-Memory Computing for Artificial Intelligence Part 1: The Analog Approach
02-16-2023
In this two-part series, we will explore and evaluate digital and analog computing and highlight the...
4 Kudos | 1 Reply
A Fresh Take on Neural Network Pruning from the Angle of Graph Theory
11-29-2023
Model pruning is arguably one of the oldest methods of deep neural network (DNN) model size reducti...
1 Kudos | 0 Replies
ULTRA: Foundation Models for Knowledge Graph Reasoning
11-29-2023
Training a single generic model for solving arbitrary datasets has long been a dream for ML researchers,...
1 Kudos | 0 Replies
Knowledge Retrieval Takes Center Stage
11-16-2023
GenAI Architecture Shifting from RAG Toward Interpretive Retrieval-Centric Generation (RCG) Models
0 Kudos | 0 Replies
Accelerating Graph Neural Network Training on Intel CPU through Fused Sampling & Hybrid Partitioning
10-30-2023
Intel Labs and AIA developed a new graph sampling method called “fused sampling” that achieves up to...
9 Kudos | 1 Reply
Intel Labs Researcher Spotlight: James Jaussi and Integrated Photonics
11-07-2023
James Jaussi, Senior PE and Director of the PHY Research Lab at Intel Labs, and his team perform int...
0 Kudos | 0 Replies
Multi-Objective GFlowNet: Intel Labs, Mila and Recursion Collaborate on AI for Scientific Discovery
10-18-2023
Intel Labs, Mila, and Recursion Pharmaceuticals collaborated on Multi-Objective GFlowNets (MOGFNs), ...
0 Kudos | 0 Replies
Intel Labs Releases Open MatSci ML Toolkit 1.0 for Training AI Models on Materials Science
10-09-2023
Intel Labs recently released the Open MatSci ML Toolkit version 1.0 on August 31, making training of...
1 Kudos | 0 Replies
Intel Presents Latest Computer Vision Research at ICCV 2023
10-03-2023
Intel presents six computer vision works at the 2023 International Conference on Computer Vision (ICC...
0 Kudos | 0 Replies
Intel Labs Presents Quantum Circuit Optimizations at IEEE Quantum Week
09-18-2023
This year’s IEEE International Conference on Quantum Computing and Engineering (QCE), or Quantum Wee...
0 Kudos | 0 Replies
Accelerating CodeGen Training and Inference on Habana Gaudi2
09-06-2023
Optimum Habana makes it easy to achieve fast training and inference of large language models (LLMs)...
1 Kudos | 0 Replies
FAENet: Intel Labs and Mila Collaborate on Data-Centric AI Model for Materials Property Modeling
09-05-2023
Intel and Mila collaborated on FAENet, a new data-centric model paradigm that improves both modeling...
1 Kudos | 0 Replies
Intel and Collaborators Present Latest Database Research at VLDB 2023
08-28-2023
Intel presents eight co-authored contributions across the research, industrial, and demonstrations t...
0 Kudos | 0 Replies
MatSci-NLP: Intel Labs and Mila Collaborate on Benchmark to Assess Materials Science Language Models
08-23-2023
Intel and Mila collaborate on MatSci-NLP, the first broad benchmark for assessing the capabilities o...
1 Kudos | 0 Replies
Intel Labs Selected by DARPA H6 to Develop Tactical-Grade Clock with Microsecond Timing Precision
08-21-2023
Intel Labs and collaborators from the University of Pennsylvania, Carnegie Mellon University, and IS...
0 Kudos | 0 Replies
ProtST: Intel and Mila Collaborate on a New Multi-Modal Protein Language Model
08-16-2023
Intel and Mila collaborated on ProtST, a multi-modal protein language model that enables users to cr...
1 Kudos | 0 Replies
Survival of the Fittest: Compact Generative AI Models Are the Future for Cost-Effective AI at Scale
07-25-2023
The case for nimble, targeted, retrieval-based models as the best solution for generative AI applica...
1 Kudos | 0 Replies
Intel Presents Six Papers on Novel AI Research at ICML 2023
07-24-2023
Intel had six papers accepted at the 40th International Conference on Machine Learning (ICML) 2023, ...
1 Kudos | 0 Replies
Intel Xeon is all you need for AI inference: Performance Leadership on Real World Applications
07-19-2023
Intel is democratizing AI inference by delivering better price and performance for real-world use c...
2 Kudos | 2 Replies
Intel® Xeon® trains Graph Neural Network models in record time
07-20-2023
The 4th gen Intel® Xeon® Scalable Processor, formerly codenamed Sapphire Rapids, is a balanced platfo...
1 Kudos | 0 Replies
Intel Labs Research on Intent-Driven Orchestration Leads to Simplifying Cloud and Edge Deployments
07-20-2023
Using research from Intel Labs, Intel has introduced the open-source Intent-Driven Orchestration (ID...
0 Kudos | 0 Replies
JUMP and Intel Labs Success Story: UCSD Professor Leads HD Computing Research Efforts
07-18-2023
For the past five years at the JUMP CRISP Center at UCSD, Professor Tajana Simunic Rosing has led hy...
0 Kudos | 0 Replies
Can we Improve Early-Exit Transformers? Novel Adaptive Inference Method Presented at ACL 2023
07-13-2023
Intel Labs and the Hebrew University of Jerusalem present SWEET, an adaptive inference method for te...
1 Kudos | 0 Replies
Intel’s Leading Contributions at SIGMOD’s Data Management on New Hardware (DaMoN) Workshop
06-30-2023
The 19th International Workshop on Data Management on New Hardware (DaMoN) was co-located with the 2...
0 Kudos | 0 Replies