Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Cannot run Stable Diffusion model on NPU

Shravanthi
Beginner

Hi,

We are trying to run a Stable Diffusion model from optimum-intel (OpenVINO) on the NPU, but we are getting the error below. Could anyone please help us resolve it?

Error:

(openvino_archive_env) C:\Users\tm_qc\Qualcomm\openvino_notebooks\notebooks>python .\stable_diffusion_intel_inferencing.py --model_version '2' --device NPU --batch_size 1 --prompt "red car in snowy forest, epic vista, beautiful landscape, 4k, 8k"
INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino
Fetching 13 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:00<?, ?it/s]
C:\Users\tm_qc\Qualcomm\openvino_archive_env\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
Compiling the vae_decoder to NPU ...
error: TilingStrategyAssignment Pass failed : Failed to tile VPU.MVN at 'loc(fused["MVN_1268523", "t_MVN"])'
An error occurred: Exception from src\inference\src\core.cpp:102:
[ GENERAL_ERROR ] Exception from src\vpux_plugin\src\plugin.cpp:432:
LevelZeroCompilerInDriver: Failed to compile network. Error code: 2013265924. Compilation failed
Failed to create executable
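
For context, the script builds and runs the pipeline roughly along these lines (a simplified sketch, not the full script; the model ID is only illustrative of --model_version '2'):

from optimum.intel import OVStableDiffusionPipeline

pipe = OVStableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # illustrative model ID, not necessarily the one used
    compile=False,                       # defer compilation so the device can be set first
)
pipe.to("NPU")
pipe.compile()  # "Compiling the vae_decoder to NPU ..." happens here and fails
image = pipe("red car in snowy forest, epic vista, beautiful landscape, 4k, 8k").images[0]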

 

Thanks,

Shravanthi J

Peh_Intel
Moderator

Hi Shravanthi J,

 

Please check that you are running on Windows 11 (64-bit), that the Intel® NPU driver is installed and detected, and that the NPU plugin (openvino_intel_npu_plugin.dll) is available. A quick detection check from Python is sketched after the list below.

 

· The Intel® NPU driver for Windows is available through Windows Update, but it can also be installed manually by downloading the NPU driver package and following the Windows driver installation guide.

 

· If a driver has already been installed, you should be able to find ‘Intel(R) NPU Accelerator’ in Windows Device Manager. If you cannot find such a device, the NPU is most likely listed under “Other devices” as “Multimedia Video Controller.”

 

· The NPU plugin is currently available only with the Archive distribution of OpenVINO™ (https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/windows/).
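
As a quick check (a minimal sketch; it only assumes the archive package is installed in your Python environment), you can list the devices OpenVINO detects:

import openvino as ov

core = ov.Core()
print(core.available_devices)  # 'NPU' should appear, e.g. ['CPU', 'GPU', 'NPU']
if "NPU" in core.available_devices:
    print(core.get_property("NPU", "FULL_DEVICE_NAME"))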

 

Next, please validate the NPU installation by running the Hello Classification Python Sample with the googlenet-v1 model. I have attached the googlenet-v1 model (FP32) for your convenience; the essence of the check is sketched below.
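
At its core, the sample simply compiles the model on the chosen device, so the essential check reduces to a few lines (a sketch; the IR file name refers to the attachment above):

import openvino as ov

core = ov.Core()
model = core.read_model("googlenet-v1.xml")  # FP32 IR attached above
compiled = core.compile_model(model, "NPU")  # succeeds only if the NPU stack is healthy
print("Compiled on:", compiled.get_property("EXECUTION_DEVICES"))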

 

Lastly, please share your demo script for running the Stable Diffusion model (stable_diffusion_intel_inferencing.py), or point us to the source you are following, so that we can duplicate the issue.

 

 

Regards,

Peh

 

Shravanthi
Beginner

Hi,

I am using an Intel(R) Core(TM) Ultra 7 155H system with OpenVINO version 2023.3.0 (archive distribution, for NPU). We are able to run InceptionV4 on the NPU; screenshots are below.

[Screenshots attached: InceptionV4 running successfully on NPU]

I have attached the Stable Diffusion script. Please let me know if there is a workaround to run generative AI models on the NPU.

 

Thanks,

Shravanthi J

 

Aznie_Intel
Moderator

Hi Shravanthi J,

 

You may refer to our Generative AI Optimization and Deployment documentation for running generative AI models using native OpenVINO APIs. There are also some examples of using Optimum-Intel for model conversion and inference; a minimal conversion sketch follows.
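
For example, converting a model and saving the resulting IR with Optimum-Intel looks roughly like this (a sketch; the model ID is an assumption, not taken from your script):

from optimum.intel import OVStableDiffusionPipeline

# export=True converts the original PyTorch weights to OpenVINO IR on the fly
pipe = OVStableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", export=True)
pipe.save_pretrained("sd21_openvino")  # writes the IR files for later reuse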

 

However, please note that NPU support in OpenVINO is still under active development and may offer a limited set of supported OpenVINO features. The NPU device is currently supported by the AUTO and MULTI inference modes (HETERO execution is partially supported, for certain models). You may find more information on the NPU plugin features here.
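
As an illustration of the AUTO mode mentioned above (a sketch; the IR path is hypothetical), AUTO picks a capable device from a priority list, so a model that cannot be compiled for the NPU can still be placed on the CPU:

import openvino as ov

core = ov.Core()
model = core.read_model("vae_decoder/openvino_model.xml")  # hypothetical path
# Candidates are tried in priority order: NPU first, with CPU as the fallback
compiled = core.compile_model(model, "AUTO:NPU,CPU")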

 

 

Regards,

Aznie


Aznie_Intel
Moderator

Hi Shravanthi J,


This thread will no longer be monitored since we have provided the information requested. If you need any additional information from Intel, please submit a new question.



Regards,

Aznie

