Mobile and Desktop Processors
Intel® Core™ processors, Intel Atom® processors, tools, and utilities

Low NPU utilisation/performance with OpenVINO on Core Ultra 5 226V

gr6
Beginner

Hi,

 

I'm running some OpenVINO code on a laptop with a Core Ultra 5 226V, and finding that when the target device for OpenVINO is "NPU", I get really poor performance in terms of latency. I have run the same code on other Intel CPUs with integrated NPU and had much more success. I have updated to the latest NPU driver. Do you have any suggestions as to what I might try to debug this issue please? Are there any known issues with this particular NPU and OpenVINO?

 

Many thanks!

2 Replies
jbruceyu
New Contributor I

Hi,

There are no widely documented compatibility issues between the integrated Intel® NPU on the Core Ultra 5 226V and OpenVINO; however, low NPU utilization or poor latency usually comes down to a few contributing factors.

First, make sure you are using a recent OpenVINO release (2024.x or newer) together with the latest Intel NPU driver for your operating system. A version mismatch between OpenVINO and the NPU driver can hinder NPU performance.

Second, confirm that your model is compatible with and correctly converted for NPU execution. Not all model architectures are fully optimized for the NPU, and unsupported layers or operations can force work back onto the CPU. Converting the model with openvino.convert_model and quantizing it to INT8 (for example with NNCF) generally helps, since the NPU performs best on quantized models.

You can use benchmark_app to explicitly run the model with the -d NPU flag and examine the output for fallback messages or partial-execution warnings. On Windows, you can also monitor NPU usage through the Task Manager's Performance tab to validate that inference workloads are actually being dispatched to the NPU. Running the same model with -d CPU or -d GPU is a useful comparison point to see whether the slowdown is specific to the NPU plugin.
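If it is useful, here is a minimal Python sketch of that model-compatibility check (assuming an OpenVINO 2024.x environment; "model.xml" is just a placeholder for your own IR file):

import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)   # "NPU" should be listed here

model = core.read_model("model.xml")                   # placeholder path

# query_model returns the operations the NPU plugin can handle; anything
# missing from the result would have to run elsewhere (or block compilation).
supported = core.query_model(model, "NPU")
unsupported = [op.get_friendly_name() for op in model.get_ops()
               if op.get_friendly_name() not in supported]
print("Ops not supported on NPU:", unsupported)

# If everything is supported, compiling directly for the NPU should succeed.
compiled = core.compile_model(model, "NPU")
print("Execution devices:", compiled.get_property("EXECUTION_DEVICES"))

The equivalent benchmark_app run would be roughly: benchmark_app -m model.xml -d NPU -hint latency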

The Core Ultra 5 226V (a Core Ultra Series 2 part) includes an integrated NPU capable of accelerating many AI tasks, though performance varies with model complexity, layer support, and precision. For latency-sensitive or complex workloads, the CPU or GPU may still perform better depending on how the model is optimized.
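If you want a quick side-by-side latency comparison outside benchmark_app, a rough sketch like the one below works (again assuming the placeholder model.xml, a static input shape, and a float32 input; adjust for your model):

import time
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # placeholder path

for device in ("CPU", "GPU", "NPU"):
    if device not in core.available_devices:
        continue
    # The LATENCY hint configures each plugin for single-stream, low-latency inference.
    compiled = core.compile_model(model, device, {"PERFORMANCE_HINT": "LATENCY"})
    data = np.random.rand(*compiled.input(0).shape).astype(np.float32)
    compiled(data)                            # warm-up run
    start = time.perf_counter()
    for _ in range(50):
        compiled(data)
    print(f"{device}: {(time.perf_counter() - start) / 50 * 1000:.2f} ms per inference")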

 

Hope this helps.

gr6
Beginner

Many thanks for this! It was very useful to run benchmark_app with the -d NPU flag as you suggested. This yields good performance and utilization (as viewed in the Task Manager Performance tab), which I think suggests that my driver, model, and OpenVINO package are all fine.
