Multi-node deployments using Intel® AI for Enterprise RAG
08-21-2025
As enterprises scale generative AI across diverse infrastructures, Intel® AI for Enterprise RAG solu...
5 Kudos | 1 Comment
Building AI With Empathy: Sorenson’s Mission for Accessibility
08-22-2025
For Sorenson Senior Director of AI Mariam Rahmani, the future of AI isn’t about building the flashie...
0 Kudos | 0 Comments
Curious Case of Chain of Thought: Improving CoT Efficiency via Training-Free Steerable Reasoning
08-06-2025
Researchers from the University of Texas at Austin and Intel Labs investigated chain-of-thought reas...
0 Kudos | 0 Comments
AI’s Next Frontier: Human Collaboration, Data Strategy, and Scale
08-05-2025
Ramtin Davanlou, CTO of the Accenture and Intel Partnership, explores what it really takes for enter...
0 Kudos | 0 Comments
Intel Labs Works with Hugging Face to Deploy Tools for Enhanced LLM Efficiency
08-05-2025
Large Language Models are revolutionizing AI applications; however, slow inference speeds continue t...
0 Kudos | 0 Comments
Efficient PDF Summarization with CrewAI and Intel® XPU Optimization
07-29-2025
In this blog, we demonstrate how to build and run a PDF Summarizer Agent using Intel® XPU-optimized ...
0 Kudos | 0 Comments
Intel Labs Open Sources Adversarial Image Injection to Evaluate Risks in Computer-Use AI Agents
07-28-2025
Adversarial examples can force computer-use artificial intelligence (AI) agents to execute arbitrary...
0 Kudos | 0 Comments
Optimizing LLM Inference on Intel® Gaudi® Accelerators with llm-d Decoupling
07-28-2025
Discover how Intel® Gaudi® accelerators and the llm-d stack improve large language model inference b...
0 Kudos | 0 Comments
Robots Meet Humans: Intel Labs Extends Robotics Safety to Cover 3D Environments
07-21-2025
Intel Labs researchers have developed a new set of safety concepts for mobile and stationary robots ...
0 Kudos | 0 Comments
Intel and Elementary Partner to Deliver Self-Learning Quality Inspection for Manufacturing
07-21-2025
Manufacturing environments demand computing solutions that operate continuously under challenging co...
1 Kudos | 0 Comments
Unleashing the Power: Why Intel Workstations Are Essential for Modern Business Success
07-21-2025
Unlock AI's potential with Intel workstations—essential for modern business success and productivity...
0 Kudos | 0 Comments
Optimizing AI Inputs on the Web: Raidu’s Readability Engine Built with Intel® Liftoff
07-20-2025
Raidu built the first LLM Readability Engine. Intel® Liftoff for AI Startups program helped scale it...
0 Kudos | 0 Comments
Bringing AI Back to the Device: Real-World Transformer Models on Intel® AI PCs
07-20-2025
Intel and Fluid Inference optimized transformer models to run locally on Intel AI PCs, enabling priv...
0 Kudos | 0 Comments
Building a Sovereign and Future-Proof Foundation with TrustGraph
07-20-2025
TrustGraph, an Intel Liftoff startup, is redefining enterprise AI with open-source transparency and ...
2 Kudos | 0 Comments
LAIbel by Envisionairy: A Smarter Way to Label Images for AI Training
07-20-2025
lAIbel is an open-source image labeling platform that works in any browser locally or in the cloud. ...
0 Kudos | 0 Comments
Fine-Tuning DeepSeek-R1-Distill-Qwen-1.5B Reasoning Model on Intel Max Series GPUs
07-20-2025
In this article, we focus on fine-tuning the DeepSeek-R1-Distill-Qwen-1.5B Reasoning Model to improv...
0 Kudos | 0 Comments
Building Efficient Agentic RAG System with SmolAgents and Intel GPU Acceleration
07-20-2025
An in-depth look at building lightweight, tool-augmented AI agents using hybrid retrieval and local ...
1 Kudos | 0 Comments
Securing AI Beyond Shadow Practices: Insights from the Intel® Liftoff Startup Ecosystem
07-17-2025
Shadow AI is rising fast. Intel® Liftoff startups are building secure, scalable tools to protect dat...
0 Kudos | 0 Comments
Does DeepSeek* Solve the Small Scale Model Performance Puzzle?
02-05-2025
Learn how the DeepSeek-R1 distilled reasoning model performs and see how it works on Intel hardware.
9 Kudos | 2 Comments
Tackling Network Security: AI Agents at the Edge with Red Hat AI on Intel® Processors and Graphics
07-15-2025
Executive Summary: The cybersecurity landscape is evolving rapidly, with organizations facing increa...
0 Kudos | 0 Comments
Intel Labs Presents Latest Machine Learning Research Among Eight Papers at ICML 2025
07-14-2025
Intel Labs is excited to present six works at this year's ICML conference in Vancouver, Canada, inclu...
0 Kudos | 0 Comments
Intel Labs Researcher Souvik Kundu Receives DAC Under-40 Innovators Award for Impactful AI Research
07-10-2025
Souvik Kundu is a Staff Research Scientist at Intel Labs, leading scalable and efficient AI research...
2 Kudos | 0 Comments
How Startups Can Benefit from Corporates: Learnings from Intel® Liftoff for AI Startups
07-09-2025
Wondering if your AI startup should team up with a big tech company? Here’s what 350+ founders learn...
0 Kudos | 0 Comments
Mamba-Shedder: Intel Labs Explores Efficient Compression of Selective Structured State Space Models
07-08-2025
Utilizing block pruning techniques, Intel Labs researchers developed the Mamba-Shedder solution to r...
1 Kudos | 0 Comments
Deploying Scalable Enterprise RAG on Kubernetes with Ansible Automation
07-07-2025
Generative AI is changing how businesses work, and Retrieval-Augmented Generation (RAG) is one of th...
5 Kudos | 0 Comments