Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Compile BiseNetv2 model for NPU

Maick
Beginner

Hello,

 

I am trying to run benchmark_app on a BiseNetv2 model with the NPU device, but the compilation process never finishes.

 

If I choose CPU or NPU, everything works fine. I also tried a MobileNet model, and it works on all devices.

 

It seems to be specific to this model on this device...

 

How can I get more information about why the compilation fails?

 

Here is my configuration:

System: Intel Core Ultra 7 155H

Operating system: Windows 11 (I also tried on Ubuntu 22.04)

OpenVINO: 2024.5.0

NPU driver: 2024.5.0

 

Thanks for your answer.

Aznie_Intel
Moderator

Hi Maick,

 

Thanks for reaching out. Can you confirm which plugins are working fine with the BiseNetv2 model? For further checking, you may share the model files (.xml and .bin).

 

 

Regards,

Aznie


Maick
Beginner

Hi Aznie,

 

Thanks for your answer.

 

Sorry for the typo, it works fine with CPU and GPU.

 

I attached the .xml and .bin of the model.

 

Regards,

 

Maïck

Aznie_Intel
Moderator

Hi Maick,

 

Thanks for the model files. We are checking this with the engineering team and will get back to you once the information is available.

 

 

Regards,

Aznie


Witold_Intel
Employee

Hi Maick,


Could you please share your NPU driver version with us? OpenVINO developers have reproduced the case without any compilation or inference problems, for either the FP16 or the quantized (FP16-INT8) version of the model. Our recommendation would therefore be to upgrade to the latest available driver version (https://github.com/intel/linux-npu-driver/releases/tag/v1.10.0). By the way, is your setup MTL or LNL?


Maick
Beginner
Hi,
 
Thanks for your answer !
 
Here is what I just tried:
I still have the same behavior: the script is blocked at step 7 and cannot build the model.
 
I don't understand what you mean by MTL or LNL.
Witold_Intel
Employee

Thank you for your feedback. MTL is short for Meteor Lake and LNL for Lunar Lake; these are Intel chip families.


Witold_Intel
Employee

Please bear with us. I relayed your results to the developers but there has been no response yet.

