The center of gravity for automation is shifting.
What began as computer vision running simple detection and classification tasks is giving way to a new generation of physical and agentic AI, and much of it is happening at the edge. These are systems that reason over multimodal data from every kind of device—cameras, sensors, time-series streams—and adapt in real time to automate critical operations far outside the cloud.
This shift isn’t theoretical, and it’s no longer hype. It’s the promise of IT-OT convergence coming alive in factories, robotic systems, hospitals, retail stores, and city intersections.
That’s why at Embedded World 2026 Intel is meeting the moment—with new silicon and open software that continue driving real-world performance. Just as we always have.
This Moment Was Decades in the Making.
The edge is always evolving. What started with fixed-function embedded designs powering industrial controllers forty years ago is driving toward edge-first operational autonomy informed by multimodal data. Sensors—for temperature, pressure and flow—now work alongside smart cameras, driving advances in computer vision for tasks like defect detection, quality management, and healthcare scanning. And of course, with advances in AI, more change is coming, faster than ever before.
At each step of the journey, Intel has been there. Working side-by-side with customers over the last four decades of edge technology, we have worked with our partners to support more than 100,000 deployments. In the last 10 years alone, we shipped over 250 million x86 processors for the edge and made more than 75,000 contributions to the Linux kernel—more than the combined contributions of AMD, Qualcomm, and NVIDIA.
For Intel, the edge isn’t a new space. We built it with the ecosystem.
Now, we have another wave of evolution at the edge, with Vision Language Models (VLMs) and Vision Language Action Models (VLAs) that combine computer vision with intelligence—advanced generative, physical and agentic AI—that makes greater automation not only possible, but practical. Most importantly, these new models have much more context and understanding of the scene and what is happening in the environment—going beyond object recognition in a video. They can search, compare, contrast, respond and act.
They don’t break when operating realities challenge laboratory assumptions.
It’s a game-changing breakthrough that enables local automation, decoupled from cloud dependencies, with industrial-grade precision and reliability.
Finally, the promise of the edge is aligning with the technological reality. But scaling this new wave of AI innovation will take more than raw acceleration.
Raw Throughput Won't Keep a Heart Monitor on Time.
The AI conversation often fixates on TOPS and benchmark throughput. At the edge, that only tells part of the story.
Edge environments are constrained—by power, by physical space, by cost, by temperature extremes. You can’t throw racks of servers at the problem. You have to run AI alongside compute, media, graphics, and real-time control loops. And you have to do it within the existing footprint of a hospital, a factory floor, or a train station.
What physical and agentic AI at the edge actually demands is efficient inferencing and acceleration, multimodal understanding, deterministic performance, real-time control, functional safety, and continuous operation across rugged, distributed environments—all without guaranteed cloud connectivity. It must also include the explainability, predictability and sovereignty that regulated industries require for risk-based AI deployments.
Real-time and deterministic performance can matter more than raw acceleration, especially in certain healthcare and manufacturing use cases like patient monitoring and industrial controls. Here, precision isn’t a nice-to-have. It’s the foundation of every safety-critical deployment at the edge.
Intel® Core™ Series 2 Strengthens Precision at the Edge.
At Embedded World 2026, we’re expanding our portfolio with the Intel® Core™ Series 2 processor with P-cores purpose-built for edge deployments that demand precision, longevity, and platform-wide consistency.
Intel® Time Coordinated Computing (TCC) and Time Sensitive Networking (TSN) deliver time-aware, predictable execution. Combined with gen-over-gen performance improvement, environmental hardening, 10-year availability, and backward compatibility, you can run AI workloads with or without a paired discrete GPU, with consistency, precision, and efficiency. When compared to AMD's 9700X at equivalent power, the Intel® Core™ Series 2 delivers up to 2.5x more deterministic scheduling and up to 3.8x better predictable performance under load. And, thanks to software co-optimization, 4.4x lower max PCIe latency.
Of course, the real test is what our customers can do with it.
Neurocle reports an average 1.4x reduction in inference latency for deep learning inspection models, resulting in more responsive defect detection on manufacturing lines. Saimos is seeing up to a 2.3x gain in thread-per-channel efficiency, enabling richer analytics and more cameras on the same hardware budget. Codesys is achieving about a 1.6x increase in virtual PLC density, directly reducing cabinet size, wiring complexity, and hardware cost for industrial designs.
As we can see, one-size-fits-all doesn’t work when every edge is different. Likewise, physical and agentic AI workloads have their own requirements: acceleration is needed, but within a low-power envelope.
Intel® Core™ Ultra Series 3 Brings AI and Precision Together in a Single SoC.
Earlier this year, we introduced integrated acceleration for AI workloads with the Intel® Core™ Ultra Series 3 for Edge. It’s our first processor that delivers up to 180 TOPS of integrated AI acceleration with real-time and deterministic capabilities in a single SoC. With up to 16 cores, a built-in NPU for low-power AI inference, and up to 12 Xe GPU cores for high-throughput AI and video analytics, it’s designed for building power-efficient solutions that bring together computer vision, generative AI, agentic AI and physical AI capabilities for intelligence at the edge.
This is critical for running VLM and VLA workloads in physical and agentic AI use cases in real-world conditions, where real-time precision, durability, inferencing efficiency and model-tuning are needed: robotics, defect detection, infrastructure monitoring, patient monitoring, loss prevention, and immersive customer service experiences.
The Intel® Core™ Ultra Series 3 for Edge brings high-performance integrated AI to places where it’s never been possible before.
Intel also shows consistent performance leadership across CPU, GPU, and NPU, not just in a single engine. In fact, Intel’s built-in GPU delivers 9x the performance of the AMD HX 370 in a head-to-head comparison.
But what excites me most is the TCO story. The cost to deploy an AI solution is key. The integrated AI acceleration in Intel® Core™ Ultra processors (Series 3) lets customers displace higher-cost, higher-power discrete GPUs—while reducing system complexity, simplifying thermal design, and delivering the kind of reliability edge customers have come to expect. In real deployments across industries, we’ve seen 39 to 67 percent TCO savings compared to alternative solutions.
That can be the difference between deploying at scale and having a proof of concept that never leaves the lab.
Of course, our customers provide the strongest KPIs:
- In healthcare, Nanox is now getting insights from imaging and diagnostics almost instantly — right at the point of care. That means clinicians can act faster, and because everything stays on-device, patient data remains private and protected.
- For humanoid robots, Circulus is seeing smoother motion, better scene understanding, and far more natural interactions. It just makes robots feel more responsive and more capable in real-world environments.
- For quick service devices and LLM-driven experiences, Sensory AI is delivering answers faster and more reliably, all while using far less power. So, you get snappier, more consistent responses without needing a bulky accelerator.
- And in smart cities, ISS is improving how quickly operators can react to what’s happening across intersections and public spaces, all while reducing the amount of hardware they need in the field.
Open Software, Open Ecosystem, Lower Risk.
But silicon alone doesn’t guarantee a successful edge deployment.
Efficiently building, deploying, running and managing an edge solution at scale requires open systems, open software and a tested ecosystem. Otherwise, you’re building from scratch, or relying on a single vendor with a proprietary solution—all of which entails significant cost and likely supply chain bottlenecks.
Software and ecosystem cannot be separated from success at the edge.
That’s why Intel launched our AI Edge Systems last year. These are recommended, best-known system configurations that eliminate the guesswork of how much AI to add. Benchmarked, sized and verified for a range of form factors and use cases at the edge, AI Edge Systems enable leading ODMs and OEMs to deliver qualified commercial solutions and characterize their AI performance. We also launched Edge AI Suites—reference applications, sample code, tools, libraries and benchmarks for manufacturing, retail, robotics, metro for smart cities and education for smart classrooms. All to help ISVs, system integrators and solution builders jumpstart solution development, without reinventing the wheel every time.
Now, we are announcing a preview version of the new Edge AI Suite for Health & Life Sciences. This will be our sixth suite and, like the others, includes validated reference workloads and benchmarking tools to help health solution builders bring multimodal AI capabilities to patient monitoring use cases. The new Health & Life Sciences AI Suite will be available later this quarter, but you can find the preview here on GitHub now.
The Edge AI Suites are one part of our open edge approach, alongside Intel® AI Edge Systems and Open Edge Platform software, which we offer to make it easier for the entire value chain to deploy AI at the edge and get the innovation flywheel going much faster.
With over 4,000 integrators and ISVs in our global ecosystem, this isn’t a closed platform play. It’s an open foundation that encourages innovation across the industry.
The Edge is Advancing
The future of the edge is no longer on the horizon. It’s happening now, in wave after wave of AI innovation. In fact, according to a recent IDC survey, 84.6% of organizations are either using or planning to use generative AI in edge or hybrid environments (IDC's 2025 AI Infrastructure Evolution Survey, n=566). IDC also predicts the market for AI processors and accelerators supporting edge workloads will reach $64B by 2030, reflecting a five-year CAGR of 16.1%.
The opportunity isn’t just real. It’s growing fast.
What I’ve seen over decades in this space is that the companies that succeed pick the right technologies, the right partners, and the right approach. They clearly articulate the business outcome and the ROI to their leadership—how they’re going to make money, and how they’re going to save money. They invest in ecosystems that give them flexibility and help them avoid lock-in. They deploy on platforms that are proven, open, and built for the realities of edge environments.
Intel’s portfolio is well positioned for this growth. Decades of experience and expertise. Integrated AI acceleration that delivers real-world performance and TCO. An open ecosystem of partners already deploying these solutions in the real world.
Whether you’re building robots, managing a factory, running a hospital, or transforming a city’s transportation system—we’re ready to partner with you on this journey. This is how AI moves the world. And this is just the beginning.
For notices, disclaimers, and details about certain performance claims, visit www.intel.com/PerformanceIndex