Multi-node deployments using Intel® AI for Enterprise RAG
08-21-2025
As enterprises scale generative AI across diverse infrastructures, Intel® AI for Enterprise RAG solu...
Efficient PDF Summarization with CrewAI and Intel® XPU Optimization
07-29-2025
In this blog, we demonstrate how to build and run a PDF Summarizer Agent using Intel® XPU-optimized ...
Free Multimodal RAG Course: Prediction Guard Partners with Intel® Labs and DeepLearning.AI
10-11-2024
Ready to take your AI skills to the next level? PredictionGuard, in partnership with Intel® Labs and...
Free Multimodal RAG Course from Intel Labs Available on DeepLearning.AI
10-01-2024
Interested in learning more about Multimodal RAG? Want to quickly develop the expertise to create A...
Unleashing the Power: Why Intel Workstations Are Essential for Modern Business Success
07-21-2025
Unlock AI's potential with Intel workstations—essential for modern business success and productivity...
Optimizing AI Inputs on the Web: Raidu’s Readability Engine Built with Intel® Liftoff
07-20-2025
Raidu built the first LLM Readability Engine. Intel® Liftoff for AI Startups program helped scale it...
Bringing AI Back to the Device: Real-World Transformer Models on Intel® AI PCs
07-20-2025
Intel and Fluid Inference optimized transformer models to run locally on Intel AI PCs, enabling priv...
Building a Sovereign and Future-Proof Foundation with TrustGraph
07-20-2025
TrustGraph, an Intel Liftoff startup, is redefining enterprise AI with open-source transparency and ...
LAIbel by Envisionairy: A Smarter Way to Label Images for AI Training
07-20-2025
lAIbel is an open-source image labeling platform that works in any browser locally or in the cloud. ...
Fine-Tuning DeepSeek-R1-Distill-Qwen-1.5B Reasoning Model on Intel Max Series GPUs
07-20-2025
In this article, we focus on fine-tuning the DeepSeek-R1-Distill-Qwen-1.5B Reasoning Model to improv...
Building Efficient Agentic RAG System with SmolAgents and Intel GPU Acceleration
07-20-2025
An in-depth look at building lightweight, tool-augmented AI agents using hybrid retrieval and local ...
Securing AI Beyond Shadow Practices: Insights from the Intel® Liftoff Startup Ecosystem
07-17-2025
Shadow AI is rising fast. Intel® Liftoff startups are building secure, scalable tools to protect dat...
Does DeepSeek* Solve the Small Scale Model Performance Puzzle?
02-05-2025
Learn how the DeepSeek-R1 distilled reasoning model performs and see how it works on Intel hardware.
How Startups Can Benefit from Corporates: Learnings from Intel® Liftoff for AI Startups
07-09-2025
Wondering if your AI startup should team up with a big tech company? Here’s what 350+ founders learn...
Deploying Scalable Enterprise RAG on Kubernetes with Ansible Automation
07-07-2025
Generative AI is changing how businesses work, and Retrieval-Augmented Generation (RAG) is one of th...
Building Agentic AI Foundations: How Intel® Liftoff Startups Are Preparing for the Next GPT Moment
06-27-2025
Agentic AI is here: See how Intel® Liftoff startups are building smarter, more autonomous systems th...
Deploying Llama 4 Scout and Maverick Models on Intel® Gaudi® 3 with vLLM
06-24-2025
Learn how to deploy Llama 4 Scout and Maverick models on Intel® Gaudi® 3 using vLLM for efficient, h...
Running Llama3.3-70B on Intel® Gaudi® 2 with vLLM: A Step-by-Step Inference Guide
06-24-2025
Run Llama 3.3-70B efficiently on Intel® Gaudi® 2 using vLLM. Learn setup, configuration, and perform...
Accelerating Llama 3.3-70B Inference on Intel® Gaudi® 2 via Hugging Face Text Generation Inference
06-23-2025
Learn how to deploy Llama 3.3-70B on Intel® Gaudi® 2 AI accelerators using Hugging Face TGI, with pr...
Exploring Vision-Language Models (VLMs) with Text Generation Inference on Intel® Data Center GPU Max
06-23-2025
Supercharge VLM deployment with TGI on Intel XPUs. This guide shows how to set up, optimize, and ser...
Building Agentic Systems for Preventative Healthcare with AutoGen
06-04-2025
This blog demonstrates the preventative healthcare outreach agentic system built using AutoGen.
GenAI-driven Music Composer Chorus.AI: Developer Spotlight
05-28-2025
An Intel® Student Ambassador’s GenAI solution for music composition
AI Hacks for Marketing and Sales
05-27-2025
5 AI hacks every founder should know. Learn how to personalize outreach, sharpen pitches, validate p...
What Startups Built This Time at Intel® Liftoff Days
05-27-2025
The latest Intel® Liftoff Days was a sprint full of progress, teamwork, and fresh ideas. Here’s what...
Low-Power AI: Driving the Next Era of Efficient Intelligence
05-23-2025
Falcons.AI’s 4MB neural network mimics the brain to cut power use by 10x, delivering accurate image ...