Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to accelerate the NPU using OpenVINO

Shravanthi
Beginner

Hi,

We are trying to run some models on the NPU and GPU of an Intel Core Ultra system. When we ran inference on the GPU, CPU utilization was high while GPU utilization was very low. We also checked whether any layers were falling back to the CPU device, but all layers reported running on the GPU.

We are also unable to run on the NPU: when we query the available devices, only CPU and GPU are listed, even though the NPU driver is already installed. Could you please help us run inference on the NPU with OpenVINO and increase GPU utilization?

 

Thanks,

Shravanthi J

Aznie_Intel
Moderator

Hi Shravanthi J,

 

Thanks for reaching out. May I know which OpenVINO version and system you are using? The Intel® NPU device requires a proper driver to be installed on the system. Make sure you use the most recent supported driver for your hardware setup. You may refer to these Configurations for Intel® NPU with OpenVINO™.
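To confirm what the runtime can see, here is a minimal sketch (assuming the `openvino` Python package is installed; the import is guarded so the snippet degrades gracefully where it is not):

```python
# Sketch: print the OpenVINO runtime version and the devices it detects.
try:
    import openvino as ov
    version = ov.get_version()
    devices = ov.Core().available_devices  # e.g. ['CPU', 'GPU', 'NPU']
except ImportError:
    version, devices = "openvino not installed", []

print("OpenVINO:", version)
print("Devices :", devices)
```

If the NPU driver and plugin are both in place, 'NPU' should appear in that list.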

 

 

 

Regards,

Aznie


Shravanthi
Beginner

Hi Aznie,

 

I am using OpenVINO version 2023.3.0 on an Intel(R) Core(TM) Ultra 7 155H system.

I can see that the NPU driver is already installed:

 

Shravanthi_2-1706870311656.png

 

Shravanthi_3-1706870431936.png

 

Thanks
Aznie_Intel
Moderator

Hi Shravanthi J,

 

May I know what application you are running? Currently, only models with static shapes are supported on the NPU. When running your application, change the device name to "NPU" and run it again.
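As a sketch of that change (the model path and the [1, 3, 224, 224] input shape below are hypothetical placeholders, and the broad `except` keeps the snippet runnable where OpenVINO, the model file, or an NPU is unavailable):

```python
# Sketch: compile a static-shape model for the NPU device.
compiled = None
try:
    import openvino as ov
    core = ov.Core()
    model = core.read_model("model.xml")  # placeholder for your IR file
    # The NPU supports static shapes only: pin any dynamic dimensions first.
    model.reshape([1, 3, 224, 224])       # placeholder input shape
    compiled = core.compile_model(model, device_name="NPU")
except Exception as exc:  # openvino missing, model missing, or no NPU here
    print("NPU sketch skipped:", exc)
```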

 

Meanwhile, OpenVINO supports asynchronous execution, enabling concurrent processing of multiple inference requests. This can raise GPU utilization and improve throughput. You may check Working with GPUs in OpenVINO to learn how to accelerate inference with GPUs.
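A minimal sketch of that asynchronous pattern using `AsyncInferQueue` (the model path, the queue depth of 4, and the `my_batches` input iterator are illustrative placeholders; the snippet is guarded so it still runs where OpenVINO or the model is absent):

```python
# Sketch: keep several inference requests in flight to raise GPU utilization.
results = []
try:
    import openvino as ov
    core = ov.Core()
    compiled = core.compile_model(core.read_model("model.xml"), "GPU")
    queue = ov.AsyncInferQueue(compiled, 4)  # 4 requests in flight

    def on_done(request, userdata):
        # Collect each finished request's first output, tagged by its index.
        results.append((userdata, request.get_output_tensor(0).data))

    queue.set_callback(on_done)
    for i, batch in enumerate(my_batches):  # my_batches: your input batches
        queue.start_async({0: batch}, userdata=i)
    queue.wait_all()  # block until every queued request completes
except Exception as exc:  # openvino or the model is unavailable here
    print("async sketch skipped:", exc)
```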

 

 

Regards,

Aznie


Shravanthi
Beginner

Hi Aznie,

 

When I list the devices, NPU does not appear at all.

 

Shravanthi_0-1707130381020.png

 

Thanks

Aznie_Intel
Moderator

Hi Shravanthi J,

 

Can you check whether your OpenVINO installation includes the openvino_intel_npu_plugin.dll file? If not, please download the latest release via the OpenVINO Runtime archive file for Windows and try the NPU plugin again.
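One quick way to check for that plugin file from Python (standard library only; the install path below is a placeholder for wherever your OpenVINO runtime actually lives):

```python
from pathlib import Path

def has_npu_plugin(runtime_dir: str) -> bool:
    """Return True if openvino_intel_npu_plugin.dll exists under runtime_dir."""
    return any(Path(runtime_dir).rglob("openvino_intel_npu_plugin.dll"))

# Placeholder path: point this at your actual OpenVINO runtime directory.
print(has_npu_plugin(r"C:\Program Files (x86)\Intel\openvino\runtime"))
```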

 

 

Regards,

Aznie


Shravanthi
Beginner

Hi Aznie,

 

Thanks, we are able to run on the NPU now. However, when we try to run a model with batch size 2 on the NPU, it fails. The model is static, yet it fails for batch size 2 (or any batch size other than 1) with the error below:

Shravanthi_0-1707726210710.png
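While NPU batching is limited, one workaround (just a sketch, not an official Intel recommendation; `infer_one` below is a stand-in for a call to a compiled batch-1 model) is to keep the model static at batch size 1 and loop over the samples:

```python
import numpy as np

batch = np.zeros((2, 3, 224, 224), dtype=np.float32)  # hypothetical batch of 2

def infer_one(sample):
    # Stand-in for `compiled_model(sample)` on a batch-1 static model.
    return float(sample.mean())

# Slicing with i:i+1 keeps the leading batch dimension,
# so every input stays shaped [1, 3, 224, 224].
outputs = [infer_one(batch[i:i + 1]) for i in range(batch.shape[0])]
print(len(outputs))  # one result per sample
```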

 

Thanks,

Shravanthi J
Aznie_Intel
Moderator

 

Hi Shravanthi J,

 

Good to hear that you are able to run on the NPU now. I noticed you have submitted the same issue in the thread below:

https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Cannot-run-Stable-Diffusion-model-on-NPU/m-p/1571447#M30674

 

I will close this ticket, and we will continue providing support on that thread. This thread will no longer be monitored since the issue has been resolved. If you need any additional information from Intel, please submit a new question.

Regards,

Aznie