Edge & 5G
Gain crucial understandings of Edge software and 5G concepts with Intel® industry experts
MikeMasci
Employee

Edge AI is entering its most consequential phase.

What began as isolated computer vision tasks like detection and classification is giving way to something far more ambitious: AI systems that reason over multimodal data, adapt in real time, and take autonomous action in the physical world.

For those of us who have spent decades building compute into places it has never been before, this is the moment we have been working toward. It is also a moment that demands a practical approach to silicon.

In other words, what do customers actually need to build, run, manage and scale edge deployments safely, securely and efficiently?

I’ll tell you this—it’s more than just adding another card to a system.

Certainly, customers need to deliver on the next wave of generative AI that enables contextual understanding in vision use cases: Vision Language Models (VLMs) and Vision Language Action Models (VLAs), where GenAI can correct for unanticipated variables in the target environment. This means inventory under different lighting conditions in store aisles, workers wearing orange safety vests instead of yellow, or a fallen patient in a healthcare facility can all be detected despite the limitations of the initial model.

And to deploy next gen use cases at scale requires a portfolio that’s built for the diversity and constraints of the edge. It requires deterministic performance for safety-critical systems in manufacturing and healthcare. It requires integrated AI acceleration that fits within existing power and thermal envelopes for robotics, smart cities and more. And, of course, it all needs to be co-optimized on open software and supported by a proven ecosystem.

At Embedded World 2026, Intel is continuing our commitment to delivering real-world performance with two new additions to our edge silicon portfolio, each built to address a distinct set of real-world requirements.

The Edge Demands More Than Raw Acceleration

The conversation about AI at the edge used to center on a single question: how many TOPS can you deliver?

That metric still matters, but it misses the real challenge. Physical and agentic AI systems do not simply classify images or detect anomalies. They reason across video, audio, text, and sensor data simultaneously. They make split-second decisions that control physical outcomes: a robotic arm adjusting its grip, a traffic system rerouting flow, a retail environment personalizing the experience in the aisle. These systems require multimodal understanding, deterministic performance, real-time control, and functional safety across distributed environments that do not tolerate downtime.

The environments where edge AI operates are highly constrained in terms of physical space and power. A smart city intersection box cannot accommodate a server rack. A collaborative robot on a factory floor runs on batteries. An in-store analytics system shares a power circuit with refrigeration. The hardware that runs physical AI must fit the operational reality, not the other way around. That is the fundamental constraint that separates edge from cloud, and it is the constraint that has historically held back the most ambitious AI deployments.

That’s why Intel is focused on delivering real-world performance in our edge portfolio.

 

Intel® Core™ Series 2 Processors Deliver Real-Time Precision for Industrial and Medical Systems

In manufacturing and healthcare, precision is not a feature. It is the foundation of every deployment. ECG monitors track the heart’s electrical signals at exact intervals. If the system samples too early, too late, or gets interrupted by another task, it can miss subtle waveform abnormalities that clinicians rely on. On a factory floor, robotic arms handling assembly, inspection, and packaging must stay perfectly synchronized. Even milliseconds of drift cause defects and misalignment.

These environments need processor performance that is consistent and predictable, not just fast. To meet these demands, Intel is announcing the Intel® Core™ Series 2 processors with P-cores, purpose-built for industrial and edge deployments where deterministic execution and longevity are non-negotiable.

The latest Intel® Core™ processors deliver up to 12 P-cores and up to 1.5x higher multithreaded performance compared to the prior generation, and support 10-year availability with long-term servicing OS support, as well as Windows Server. Intel® Time Coordinated Computing (Intel® TCC) and Time Sensitive Networking (TSN) technologies deliver time-aware, deterministic execution, which is essential for industrial control, medical devices, robotics, and automation. The processors also remain socket compatible across 12th through 14th generation Intel® Core™ products, so customers can upgrade seamlessly and extend their existing platform investments.

Real-World Performance

When it comes to performance, we can see where synthetic CPU benchmarks fall short of capturing real-world requirements. We compared 65W Intel® Core™ processors (Series 2) with P-cores against AMD's 9700X at the same power level, using idle-condition measurements for a fair comparison. Yes, AMD's higher base frequency gives it higher scores in a synthetic CPU benchmark, but real-time performance tells a very different story. The Intel® Core™ processors (Series 2) deliver up to 2.5× more deterministic scheduling behavior and up to 3.8× more predictable performance under load, as measured by the industry-standard cyclictest and RTC Testbench benchmarks. And Intel shows 4.4× lower maximum PCIe latency, driven by real-time tuning that keeps the PCIe-to-memory path awake and stable.
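To make the scheduling-determinism metric concrete: tools like cyclictest work by scheduling a wakeup at a fixed period and recording how late each wakeup actually fires. The sketch below is an illustrative, unoptimized Python version of that idea, not the cyclictest tool itself (real measurements run cyclictest on a tuned Linux system); the 1 ms period and iteration count are arbitrary choices for illustration.

```python
import time

def measure_jitter(period_ns: int = 1_000_000, iterations: int = 200):
    """Sleep on a fixed period and record how late each wakeup is.

    Returns (max_latency_ns, avg_latency_ns). A deterministic platform
    keeps the maximum latency low and stable under load; a fast but
    jittery one shows a high, unpredictable maximum.
    """
    latencies = []
    next_wake = time.monotonic_ns() + period_ns
    for _ in range(iterations):
        # Sleep until the intended wakeup time, then measure the overshoot.
        delay = next_wake - time.monotonic_ns()
        if delay > 0:
            time.sleep(delay / 1e9)
        latencies.append(max(0, time.monotonic_ns() - next_wake))
        next_wake += period_ns
    return max(latencies), sum(latencies) // len(latencies)

max_ns, avg_ns = measure_jitter()
print(f"max wakeup latency: {max_ns} ns, avg: {avg_ns} ns")
```

The gap between the average and the maximum is the number that matters for real-time workloads: a control system must be provisioned for the worst case, not the average.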

Consistent, Low-Latency Performance for Healthcare

This precision and performance support the practical needs in healthcare settings. Low-latency image processing, with or without AI, gives healthcare professionals insights that improve patient care. Uniform P-core architecture provides predictable, sub-millisecond response times to support deterministic latency and helps prevent dropped frames in medical imaging applications.

Specifically for use cases like ultrasound imaging, x-ray imaging, radiology workstations, lab diagnostics and other medical imaging, the Intel® Core™ Series 2 provides:

  • Up to four more P-cores provide headroom for AI and multimodal workflows, delivering sharper, more reliable imaging, faster 3D reconstruction, and AI-assisted diagnostics.
  • Intel® DL Boost and the OpenVINO™ toolkit improve AI-assisted workflows by using the CPU and iGPU to drive more efficient inference.
  • Local processing for demanding workloads eliminates data sovereignty challenges and cloud bandwidth issues, streamlines IEC/FDA submissions, and helps lock down software builds quickly.

Industrial-Grade Precision

In industrial manufacturing, the Intel® Core™ Series 2 delivers consistent, high-level performance and real-time control for demanding machine vision tasks and other industrial use cases, helping ensure repetitive or constant workloads run reliably with no drop in performance.

When it comes to industrial settings, we’re focused on automation and control.

  • Up to four more P-cores help power demanding AI/ML edge workloads, minimizing dependence on cloud bandwidth and enhancing operational independence.​
  • Intel® TCC and TSN are backed by decades of software co-optimization with our hardware. This tight integration between silicon and software delivers the deterministic, real-time performance required for industrial and medical edge use cases.
  • Long-life availability helps ensure consistent supply for repairs and maintenance and can improve the value realized from long certification cycles.
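What deterministic execution buys in automation and control can be sketched as a fixed-period control loop that counts missed deadlines. This is an illustrative sketch only, not Intel® TCC or any real PLC runtime; the 2 ms cycle time and the no-op `step` workload are hypothetical stand-ins.

```python
import time

def control_loop(cycle_ns: int, cycles: int, step):
    """Run `step()` every `cycle_ns` nanoseconds and count missed deadlines.

    On a deterministic platform the miss count stays near zero even under
    load; on a jittery one, overruns accumulate as drift, and in a real
    line that drift shows up as defects and misalignment.
    """
    misses = 0
    deadline = time.monotonic_ns() + cycle_ns
    for _ in range(cycles):
        step()                      # e.g. read sensors, command actuators
        now = time.monotonic_ns()
        if now > deadline:
            misses += 1             # overran the cycle: deadline missed
            deadline = now + cycle_ns
        else:
            time.sleep((deadline - now) / 1e9)
            deadline += cycle_ns
    return misses

# A trivial stand-in for real control work (hypothetical workload).
misses = control_loop(cycle_ns=2_000_000, cycles=100, step=lambda: None)
print(f"missed deadlines: {misses}")
```

In production, hardware features like TCC and TSN are what keep that miss count at zero when the same core is also servicing interrupts, networking, and AI inference.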

Edge Infrastructure, Retail, and Smart Cities

Intel® Core™ processors (Series 2) deliver the precision, consistency and efficiency required for edge servers in industrial, retail and harsh environments. Ultra-low latency, streamlined thread management, PCIe 5.0 and DDR5 memory provide fast, high-bandwidth data access to multiple GPUs without bottlenecks, enabling efficient processing of growing AI workloads at scale.

It delivers exceptional performance for high-value retail edge use cases, including frictionless transactions, retail analytics, 4K digital signage with consistent frame rates, on-device AI, and seamless device fleet management. The tight co-optimization of software and platform enables the low latency needed for POS checkout and responsive EFT, loyalty, and inventory calls, even during peak seasonal foot traffic. ​

Real-World Results

Customers are already putting this to work.  Neurocle reports an average 1.4x reduction in inference latency for deep learning inspection models, resulting in more responsive defect detection on manufacturing lines. Saimos is seeing up to a 2.3x gain in thread-per-channel efficiency, enabling richer analytics and more cameras on the same hardware budget. Codesys is achieving about a 1.6x increase in virtual PLC density, directly reducing cabinet size, wiring complexity, and hardware cost for industrial designs.

 

Intel® Core™ Ultra Series 3 Processors Bring Integrated AI Acceleration to the Constrained Edge

Now, let’s look at what happens when you combine the precision and consistency of the Intel® Core™ processors (Series 2) with a breakthrough in power-efficient inferencing and AI acceleration, all on a single SoC. Earlier this year at CES ‘26, we announced the latest Intel® Core™ Ultra processors that deliver real-time, deterministic performance and up to 180 TOPS of built-in AI power to handle physical AI and multimodal workloads behind models like VLMs and VLAs.

The Intel® Core™ Ultra Series 3 provides up to 16 cores, an integrated GPU with up to 12 Xe cores, and an efficient NPU, along with options for extended temperature, functional safety, and ECC support. And the Intel® Core™ Ultra Series 3 does all of that within the same power envelope, which means you can bring significant AI performance to the edge in constrained and harsh environments. Exactly where these systems live.

As you see in the following chart, integrated AI acceleration provides performance leadership where it counts. In fact, Intel’s built-in GPU delivers 9x the performance of the AMD HX 370 in a head-to-head comparison.

[Chart: Intel integrated GPU performance vs. AMD HX 370]

 

And in real deployments, customers have seen 39 to 67 percent TCO savings by displacing discrete GPUs with Intel’s integrated acceleration.
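A back-of-the-envelope model shows how displacing a discrete GPU can land in that savings range. The sketch below compares hardware plus lifetime energy cost for two hypothetical edge nodes; all of the dollar figures, wattages, and the energy price are illustrative assumptions, not Intel's TCO methodology or actual customer data.

```python
def tco_savings(node_cost_dgpu: float, node_cost_igpu: float,
                power_w_dgpu: float, power_w_igpu: float,
                years: float = 5, kwh_price: float = 0.15) -> float:
    """Fraction of total cost of ownership saved by an integrated-GPU node.

    TCO here = hardware cost + energy cost over the deployment lifetime,
    assuming 24/7 operation. Purely illustrative; real TCO models also
    cover cooling, maintenance, and software licensing.
    """
    hours = years * 365 * 24
    tco_d = node_cost_dgpu + power_w_dgpu / 1000 * hours * kwh_price
    tco_i = node_cost_igpu + power_w_igpu / 1000 * hours * kwh_price
    return 1 - tco_i / tco_d

# Hypothetical example: $2,400 CPU+dGPU box at 180 W
# vs. a $1,200 single-SoC box at 65 W.
saving = tco_savings(2400, 1200, 180, 65)
print(f"TCO saving: {saving:.0%}")  # → TCO saving: 55%
```

Under these assumed numbers the integrated node saves roughly 55 percent, squarely within the 39 to 67 percent range reported above; the exact figure depends on deployment scale, duty cycle, and local energy prices.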

Customer results reinforce the platform story. In healthcare, Nanox is delivering imaging and diagnostic insights at the point of care with all data remaining on-device. Circulus is achieving smoother motion and better scene understanding in humanoid robots. Sensory AI is powering faster, more reliable LLM-driven experiences at a fraction of the power. And ISS is improving operator response times across smart city intersections while reducing field hardware.

Open Software Turns Silicon into Deployed Solutions

Silicon alone does not solve the deployment challenge. Intel tracks more than a hundred different edge device categories, and the fragmentation across industries, environments, and legacy infrastructure is why most AI pilots fail to reach production.

That’s why Intel launched our AI Edge Systems last year. These are recommended, best-known system configurations that eliminate the guesswork of deciding how much AI to add. Benchmarked, sized and verified for a range of form factors and use cases at the edge, AI Edge Systems enable leading ODMs and OEMs to deliver qualified commercial solutions and characterize their AI performance.

And we also launched Edge AI Suites: starter kits for AI with optimized models, libraries, sample applications, and dimensioned benchmarks for specific verticals. Six suites now cover manufacturing, smart cities (Metro), retail, robotics, education, and the newest addition announced at Embedded World: Health & Life Sciences. The Health & Life Sciences AI Suite includes validated reference workloads and benchmarking tools for patient monitoring use cases, available later this quarter. You can check out a preview version of the new AI Suite now on GitHub.

Everything is open source, available through GitHub and ecosystem partner sites. The Edge AI Suites are part of Intel’s Open Edge Platform, with OpenVINO at its core for AI optimization across CPU, GPU, and NPU. Programs like Intel® AI Edge Systems pre-validate hardware configurations with partners so customers can deploy with confidence that the system delivers its promised performance. Over 4,000 integrators and ISVs participate in Intel’s global edge ecosystem.

Edge AI Built for the Real World

The processor has to fit the environment, not the other way around. The software has to work across fragmented industries and legacy systems. The economics have to make sense at scale, not just in the lab.

That’s why we’re expanding our edge portfolio to deliver power-efficient, precise computing and integrated AI acceleration to meet the expanding needs of AI at the edge. The Intel® Core™ Series 2 strengthens the foundation of the industrial edge and any other environment where deterministic performance and long-life reliability really matter. The Intel® Core™ Ultra Series 3 takes edge AI to a whole new level. And our Health & Life Sciences AI Suite helps customers get to market faster.

Intel builds for the real-world challenges of the edge: proven computing with decades of reliability, breakthrough integrated AI acceleration, lower TCO, and an open ecosystem that gets solutions from prototype to production without the engineering risks.

Whether the deployment is a factory floor, a hospital, a city intersection, or a robot, Intel has the silicon, the software, and the partners to make edge AI real.

This is how AI moves the world.

____________________________________________________________________________

For notices, disclaimers, and details about certain performance claims, visit www.intel.com/PerformanceIndex
