Intel® Shows OCI Optical I/O Chiplet Co-packaged with CPU at OFC2024, Targeting Explosive AI Scaling
03-21-2024
At the Optical Fiber Conference in San Diego on March 26-28, 2024, Intel plans to demonstrate our ad...
0 Kudos | 0 Comments
International Women’s Day: Celebrating Accomplishments of Women at Intel Labs
03-08-2024
There are many outstanding women at Intel Labs, and in honor of International Women’s Day, we would ...
3 Kudos | 1 Comment
Intel Labs Research Work Receives Spotlight Award at Top AI Conference (ICLR 2024)
02-27-2024
Researchers at Intel Labs, in collaboration with Xiamen University and DJI, have introduced GIM, the...
1 Kudo | 0 Comments
Intel Senior Fellow Receives ACM Fellowship for Parallel Processing of Data-Intensive Applications
02-05-2024
Intel Senior Fellow Pradeep K. Dubey was named a 2023 ACM Fellow for his lifelong technical contribu...
0 Kudos | 0 Comments
Using Open MatSci ML Toolkit to Train AI Models for Materials Science on Intel® Xeon® Processors
01-17-2024
Intel Labs and Intel DCAI researchers demonstrated how advanced AI models can be trained on 4th Gene...
0 Kudos | 0 Comments
Intel Labs’ Top Stories of 2023
01-04-2024
Intel Labs’ top stories of 2023 cover the release of the Intel® Quantum Software Development Kit and...
1 Kudo | 0 Comments
ULTRA: Foundation Models for Knowledge Graph Reasoning
11-29-2023
Training a single generic model for solving arbitrary datasets is always a dream for ML researchers,...
1 Kudo | 1 Comment
HoneyBee: Intel Labs and Mila Collaborate on State-of-the-Art Language Model for Materials Science
12-11-2023
Intel Labs and Mila collaborate on HoneyBee, a large language model specialized to materials science...
1 Kudo | 0 Comments
Intel Labs Presents 31 Papers at NeurIPS 2023
12-06-2023
This year, Intel Labs presents 31 papers at NeurIPS, including 12 at the main conference. Contributi...
0 Kudos | 0 Comments
Enabling In-Memory Computing for Artificial Intelligence Part 1: The Analog Approach
02-16-2023
In this two-part series, we will explore and evaluate digital and analog computing and highlight the...
4 Kudos | 1 Comment
A Fresh Take on Neural Network Pruning from the Angle of Graph Theory
11-29-2023
Model pruning is arguably one of the oldest methods of deep neural networks (DNN) model size reducti...
1 Kudo | 0 Comments
Knowledge Retrieval Takes Center Stage
11-16-2023
GenAI Architecture Shifting from RAG Toward Interpretive Retrieval-Centric Generation (RCG) Models
0 Kudos | 0 Comments
Accelerating Graph Neural Network Training on Intel CPU through Fused Sampling & Hybrid Partitioning
10-30-2023
Intel Labs and AIA developed a new graph sampling method called “fused sampling” that achieves up to...
10 Kudos | 1 Comment
Multi-Objective GFlowNet: Intel Labs, Mila and Recursion Collaborate on AI for Scientific Discovery
10-18-2023
Intel Labs, Mila, and Recursion Pharmaceuticals collaborated on Multi-Objective GFlowNets (MOGFNs), ...
0 Kudos | 0 Comments
Intel Labs Releases Open MatSci ML Toolkit 1.0 for Training AI Models on Materials Science
10-09-2023
Intel Labs recently released the Open MatSci ML Toolkit version 1.0 on August 31, making training of...
2 Kudos | 0 Comments
Intel Presents Its Latest Computer Vision Research at ICCV 2023
10-03-2023
Intel presents six computer vision works at the 2023 International Conference on Computer Vision (ICC...
0 Kudos | 0 Comments
Accelerating CodeGen Training and Inference on Habana Gaudi2
09-06-2023
Optimum Habana makes it easy to achieve fast training and inference of large language models (LLMs)...
1 Kudo | 0 Comments
FAENet: Intel Labs and Mila Collaborate on Data-Centric AI Model for Materials Property Modeling
09-05-2023
Intel and Mila collaborated on FAENet, a new data-centric model paradigm that improves both modeling...
1 Kudo | 0 Comments
MatSci-NLP: Intel Labs and Mila Collaborate on Benchmark to Assess Materials Science Language Models
08-23-2023
Intel and Mila collaborate on MatSci-NLP, the first broad benchmark for assessing the capabilities o...
1 Kudo | 0 Comments
Intel Labs Selected by DARPA H6 to Develop Tactical-Grade Clock with Microsecond Timing Precision
08-21-2023
Intel Labs and collaborators from the University of Pennsylvania, Carnegie Mellon University, and IS...
0 Kudos | 0 Comments
ProtST: Intel and Mila Collaborate on a New Multi-Modal Protein Language Model
08-16-2023
Intel and Mila collaborated on ProtST, a multi-modal protein language model that enables users to cr...
1 Kudo | 0 Comments
Survival of the Fittest: Compact Generative AI Models Are the Future for Cost-Effective AI at Scale
07-25-2023
The case for nimble, targeted, retrieval-based models as the best solution for generative AI applica...
2 Kudos | 0 Comments
Intel Presents Six Papers on Novel AI Research at ICML 2023
07-24-2023
Intel had six papers accepted at the 40th International Conference on Machine Learning (ICML) 2023, ...
1 Kudo | 0 Comments
Intel Xeon Is All You Need for AI Inference: Performance Leadership on Real-World Applications
07-19-2023
Intel is democratizing AI inference by delivering better price and performance for real-world use c...
2 Kudos | 2 Comments
Intel® Xeon® Trains Graph Neural Network Models in Record Time
07-20-2023
The 4th gen Intel® Xeon® Scalable Processor, formerly codenamed Sapphire Rapids, is a balanced platfo...
1 Kudo | 0 Comments