Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Can't see NPU and GPU devices on MSI Prestige Evo AI laptop with Meteor Lake CPU/SOC

Oleg_M_Intel
Employee

Hi, 

 

I have Ubuntu 22.04.3 LTS with kernels 6.2, 6.5, and 6.6. I have downloaded and installed the OpenVINO archive with the NPU and GPU plugins, and I can see them among the Runtime libraries. But when I run the hello_query_device sample, I see only the CPU and GNA devices. For reference, I have also compiled the NPU driver successfully. Please help.
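For anyone hitting the same symptom, a quick first check (a sketch, assuming a Linux system; the /dev paths below come from the i915 and intel_vpu kernel drivers, not from OpenVINO itself) is whether the kernel exposes the device nodes at all. If they are missing, no OpenVINO plugin can find the device:

```python
from pathlib import Path

def device_nodes(path: str) -> list:
    """Return device-node names under `path`, or [] if the driver isn't loaded."""
    p = Path(path)
    return sorted(entry.name for entry in p.iterdir()) if p.exists() else []

# The i915 GPU driver exposes render nodes under /dev/dri; the Meteor Lake
# NPU (intel_vpu, mainlined around kernel 6.2) should appear under /dev/accel.
print("GPU nodes:", device_nodes("/dev/dri") or "missing - i915 not loaded?")
print("NPU nodes:", device_nodes("/dev/accel") or "missing - intel_vpu not loaded?")
```

If /dev/accel is empty or absent, the problem is at the kernel-driver level rather than in the OpenVINO plugin; `dmesg | grep -i vpu` usually shows why the driver failed to probe.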

 

Thanks, 

Oleg

Hairul_Intel
Moderator

Hi Oleg,

Thank you for reaching out to us.

 

Please share the following information with us for further investigation regarding this issue:

  • CPU and GPU information.
  • Output results from hello_query_device sample.

 

Furthermore, please try using the NPU and GPU devices with the Hello Classification C++ Sample and share the results with us.

 

 

Regards,

Hairul


Oleg_M_Intel
Employee

Hi Hairul,

 

Thanks for the prompt reply.

 

System info from Ubuntu 22.04 GNOME is attached in the sysinfo.png file.

 

More details from Ubuntu 23.10 Cinnamon:

System:
  Kernel: 6.7.0-060700rc7drmtip20231229-generic arch: x86_64 bits: 64 compiler: N/A
    Desktop: Cinnamon v: 5.8.4 tk: GTK v: 3.24.38 dm: LightDM Distro: Ubuntu 23.10 (Mantic Minotaur)
Machine:
  Type: Laptop System: Micro-Star product: Prestige 16 AI Evo B1MG v: REV:1.0
    serial: <superuser required> Chassis: type: 10 serial: <superuser required>
  Mobo: Micro-Star model: MS-15A1 v: REV:1.0 serial: <superuser required> UEFI: American
    Megatrends LLC. v: E15A1IMS.106 date: 11/03/2023
Battery:
  ID-1: BAT1 charge: 97.4 Wh (100.0%) condition: 97.4/97.6 Wh (99.7%) volts: 17.6 min: 15.5
    model: MSI BIF0_9 serial: N/A status: full
CPU:
  Info: 16-core (6-mt/10-st) model: Intel Core Ultra 7 155H bits: 64 type: MST AMCP
    arch: Meteor Lake rev: 4 cache: 24 MiB note: check
  Speed (MHz): avg: 614 high: 1964 min/max: 400/4800:4500:3800:2500 cores: 1: 400 2: 400 3: 400
    4: 400 5: 1964 6: 400 7: 400 8: 400 9: 400 10: 400 11: 400 12: 400 13: 1099 14: 1455 15: 1805
    16: 400 17: 400 18: 400 19: 400 20: 400 21: 400 22: 400 bogomips: 131788
  Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
Graphics:
  Device-1: Intel Meteor Lake-P [Intel Graphics] vendor: Micro-Star MSI driver: i915 v: kernel
    arch: Gen-14 ports: active: eDP-1 empty: DP-1,DP-2,HDMI-A-1 bus-ID: 0000:00:02.0
    chip-ID: 8086:7d55
  Device-2: Bison FHD Camera driver: uvcvideo type: USB rev: 2.0 speed: 480 Mb/s lanes: 1
    bus-ID: 3-5:3 chip-ID: 5986:1193
  Display: x11 server: X.Org v: 1.21.1.7 driver: X: loaded: modesetting unloaded: fbdev,vesa
    dri: iris gpu: i915 display-ID: :0 screens: 1
  Screen-1: 0 s-res: 3840x2400 s-dpi: 96
  Monitor-1: eDP-1 model: Samsung 0x4191 res: 3840x2400 dpi: 284 diag: 406mm (16")
  API: OpenGL v: 4.6 Mesa 23.2.1-1ubuntu3.1 renderer: Mesa Intel Arc Graphics (MTL)
    direct-render: Yes
Audio:
  Device-1: Intel Meteor Lake-P HD Audio vendor: Micro-Star MSI driver: sof-audio-pci-intel-mtl
    bus-ID: 0000:00:1f.3 chip-ID: 8086:7e28
  API: ALSA v: k6.7.0-060700rc7drmtip20231229-generic status: kernel-api
  Server-1: PipeWire v: 0.3.79 status: active with: 1: pipewire-pulse status: active
    2: wireplumber status: active 3: pipewire-alsa type: plugin
Network:
  Device-1: Intel vendor: Micro-Star MSI driver: e1000e v: kernel port: N/A bus-ID: 0000:00:1f.6
    chip-ID: 8086:550b
  IF: eno1 state: up speed: 1000 Mbps duplex: full mac: <filter>
  Device-2: Intel vendor: Rivet Networks driver: N/A port: N/A bus-ID: 0000:55:00.0
    chip-ID: 8086:272b

 

Results from the hello_query_device sample:

Output:

[ INFO ] Build ................................. 2023.2.0-13089-cfd42bd2cb0-HEAD
[ INFO ]
[ INFO ] Available devices:
[ INFO ] CPU
[ INFO ] SUPPORTED_PROPERTIES:
[ INFO ] Immutable: AVAILABLE_DEVICES : ""
[ INFO ] Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ] Immutable: RANGE_FOR_STREAMS : 1 22
[ INFO ] Immutable: FULL_DEVICE_NAME : Intel(R) Core(TM) Ultra 7 155H\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00
[ INFO ] Immutable: OPTIMIZATION_CAPABILITIES : FP32 FP16 INT8 BIN EXPORT_IMPORT
[ INFO ] Mutable: NUM_STREAMS : 1
[ INFO ] Mutable: AFFINITY : HYBRID_AWARE
[ INFO ] Mutable: INFERENCE_NUM_THREADS : 0
[ INFO ] Mutable: PERF_COUNT : NO
[ INFO ] Mutable: INFERENCE_PRECISION_HINT : f32
[ INFO ] Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ] Mutable: EXECUTION_MODE_HINT : PERFORMANCE
[ INFO ] Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ] Mutable: ENABLE_CPU_PINNING : YES
[ INFO ] Mutable: SCHEDULING_CORE_TYPE : ANY_CORE
[ INFO ] Mutable: ENABLE_HYPER_THREADING : YES
[ INFO ] Mutable: DEVICE_ID : ""
[ INFO ] Mutable: CPU_DENORMALS_OPTIMIZATION : NO
[ INFO ] Mutable: CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE : 1
[ INFO ]
[ INFO ] GNA
[ INFO ] SUPPORTED_PROPERTIES:
[ INFO ] Immutable: AVAILABLE_DEVICES : GNA_SW
[ INFO ] Immutable: OPTIMAL_NUMBER_OF_INFER_REQUESTS : 1
[ INFO ] Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ] Immutable: OPTIMIZATION_CAPABILITIES : INT16 INT8 EXPORT_IMPORT
[ INFO ] Immutable: FULL_DEVICE_NAME : GNA_SW
[ INFO ] Immutable: GNA_LIBRARY_FULL_VERSION : 3.5.0.2116
[ INFO ] Mutable: GNA_DEVICE_MODE : GNA_SW_EXACT
[ INFO ] Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ] Mutable: LOG_LEVEL : LOG_NONE
[ INFO ] Immutable: EXECUTION_DEVICES : GNA
[ INFO ] Mutable: GNA_SCALE_FACTOR_PER_INPUT : ""
[ INFO ] Mutable: GNA_FIRMWARE_MODEL_IMAGE : ""
[ INFO ] Mutable: GNA_HW_EXECUTION_TARGET : UNDEFINED
[ INFO ] Mutable: GNA_HW_COMPILE_TARGET : UNDEFINED
[ INFO ] Mutable: GNA_PWL_DESIGN_ALGORITHM : UNDEFINED
[ INFO ] Mutable: GNA_PWL_MAX_ERROR_PERCENT : 1.000000
[ INFO ] Mutable: INFERENCE_PRECISION_HINT : undefined
[ INFO ] Mutable: EXECUTION_MODE_HINT : ACCURACY
[ INFO ] Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 1
[ INFO ] 
"
 
Results from  Hello Classification C++ Sample will follow soon. 
 
Thank you,
Oleg
Oleg_M_Intel
Employee

In order to run the hello_classification C++ example, I need to refer to some model and image. I am new to OpenVINO; can you please recommend suitable ones?

 

Thanks, 

Oleg

Hairul_Intel
Moderator

Hi Oleg,

In order to run the hello_classification C++ sample, you'd need to build the sample first by following the Build the Sample Applications guide.

 

Next, referring to the Hello Classification C++ Sample, you can try running the sample with the recommended googlenet-v1 model and the car.bmp sample image.

 

Follow these steps to download and convert the googlenet-v1 model:

1. Install the openvino-dev Python package to use the Open Model Zoo tools:

python -m pip install openvino-dev[caffe]

2. Download the pre-trained model:

omz_downloader --name googlenet-v1

3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

omz_converter --name googlenet-v1

 

You can download the car.bmp sample image from the storage in the "images" directory.

 

Finally, perform inference of the hello_classification sample by running the following command:

hello_classification googlenet-v1.xml car.bmp GPU

 

 

Regards,

Hairul

 

 


Oleg_M_Intel
Employee

Thanks, Hairul,

 

I get some errors when running inference as per your instructions:

[ INFO ] Build ................................. 2023.2.0-13089-cfd42bd2cb0-HEAD
[ INFO ]
[ INFO ] Loading model files: googlenet-v1.xml
Exception from src/inference/src/core.cpp:99:
Exception from src/inference/src/model_reader.cpp:137:
Unable to read the model: googlenet-v1.xml Please check that model format: xml is supported, and the model is correct. Available frontends: tflite pytorch paddle tf onnx ir

 
By the way, on my Arch Linux distro (kernel 6.6.9) I can see the GPU (Arc family); the VPU failed, but the GPU works as expected, and performance is not bad with ArrayFire. This is the output from the hello_query_device sample on Arch Linux:
[ INFO ] Build ................................. 2023.2.0-13089-cfd42bd2cb0-HEAD
[ INFO ]
[ INFO ] Available devices:
[ INFO ] CPU
[ INFO ] SUPPORTED_PROPERTIES:
[ INFO ] Immutable: AVAILABLE_DEVICES : ""
[ INFO ] Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ] Immutable: RANGE_FOR_STREAMS : 1 22
[ INFO ] Immutable: FULL_DEVICE_NAME : Intel(R) Core(TM) Ultra 7 155H
[ INFO ] Immutable: OPTIMIZATION_CAPABILITIES : FP32 FP16 INT8 BIN EXPORT_IMPORT
[ INFO ] Mutable: NUM_STREAMS : 1
[ INFO ] Mutable: AFFINITY : HYBRID_AWARE
[ INFO ] Mutable: INFERENCE_NUM_THREADS : 0
[ INFO ] Mutable: PERF_COUNT : NO
[ INFO ] Mutable: INFERENCE_PRECISION_HINT : f32
[ INFO ] Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ] Mutable: EXECUTION_MODE_HINT : PERFORMANCE
[ INFO ] Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ] Mutable: ENABLE_CPU_PINNING : YES
[ INFO ] Mutable: SCHEDULING_CORE_TYPE : ANY_CORE
[ INFO ] Mutable: ENABLE_HYPER_THREADING : YES
[ INFO ] Mutable: DEVICE_ID : ""
[ INFO ] Mutable: CPU_DENORMALS_OPTIMIZATION : NO
[ INFO ] Mutable: CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE : 1
[ INFO ]
[ INFO ] GNA
[ INFO ] SUPPORTED_PROPERTIES:
[ INFO ] Immutable: AVAILABLE_DEVICES : GNA_SW
[ INFO ] Immutable: OPTIMAL_NUMBER_OF_INFER_REQUESTS : 1
[ INFO ] Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ] Immutable: OPTIMIZATION_CAPABILITIES : INT16 INT8 EXPORT_IMPORT
[ INFO ] Immutable: FULL_DEVICE_NAME : GNA_SW
[ INFO ] Immutable: GNA_LIBRARY_FULL_VERSION : 3.5.0.2116
[ INFO ] Mutable: GNA_DEVICE_MODE : GNA_SW_EXACT
[ INFO ] Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ] Mutable: LOG_LEVEL : LOG_NONE
[ INFO ] Immutable: EXECUTION_DEVICES : GNA
[ INFO ] Mutable: GNA_SCALE_FACTOR_PER_INPUT : ""
[ INFO ] Mutable: GNA_FIRMWARE_MODEL_IMAGE : ""
[ INFO ] Mutable: GNA_HW_EXECUTION_TARGET : UNDEFINED
[ INFO ] Mutable: GNA_HW_COMPILE_TARGET : UNDEFINED
[ INFO ] Mutable: GNA_PWL_DESIGN_ALGORITHM : UNDEFINED
[ INFO ] Mutable: GNA_PWL_MAX_ERROR_PERCENT : 1.000000
[ INFO ] Mutable: INFERENCE_PRECISION_HINT : undefined
[ INFO ] Mutable: EXECUTION_MODE_HINT : ACCURACY
[ INFO ] Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 1
[ INFO ]
[ INFO ] GPU
[ INFO ] SUPPORTED_PROPERTIES:
[ INFO ] Immutable: AVAILABLE_DEVICES : 0
[ INFO ] Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 2 1
[ INFO ] Immutable: RANGE_FOR_STREAMS : 1 2
[ INFO ] Immutable: OPTIMAL_BATCH_SIZE : 1
[ INFO ] Immutable: MAX_BATCH_SIZE : 1
[ INFO ] Immutable: DEVICE_ARCHITECTURE : GPU: vendor=0x8086 arch=v785.128.0
[ INFO ] Immutable: FULL_DEVICE_NAME : Intel(R) Graphics [0x7d55] (iGPU)
[ INFO ] Immutable: DEVICE_UUID : 8680557d080000000002000000000000
[ INFO ] Immutable: DEVICE_LUID : 409a0000499a0000
[ INFO ] Immutable: DEVICE_TYPE : integrated
[ INFO ] Immutable: DEVICE_GOPS : {f16:9216,f32:4608,i8:18432,u8:18432}
[ INFO ] Immutable: OPTIMIZATION_CAPABILITIES : FP32 BIN FP16 INT8 EXPORT_IMPORT
[ INFO ] Immutable: GPU_DEVICE_TOTAL_MEM_SIZE : 26664153088
[ INFO ] Immutable: GPU_UARCH_VERSION : 785.128.0
[ INFO ] Immutable: GPU_EXECUTION_UNITS_COUNT : 128
[ INFO ] Immutable: GPU_MEMORY_STATISTICS : ""
[ INFO ] Mutable: PERF_COUNT : NO
[ INFO ] Mutable: MODEL_PRIORITY : MEDIUM
[ INFO ] Mutable: GPU_HOST_TASK_PRIORITY : MEDIUM
[ INFO ] Mutable: GPU_QUEUE_PRIORITY : MEDIUM
[ INFO ] Mutable: GPU_QUEUE_THROTTLE : MEDIUM
[ INFO ] Mutable: GPU_ENABLE_LOOP_UNROLLING : YES
[ INFO ] Mutable: GPU_DISABLE_WINOGRAD_CONVOLUTION : NO
[ INFO ] Mutable: CACHE_DIR : ""
[ INFO ] Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ] Mutable: EXECUTION_MODE_HINT : PERFORMANCE
[ INFO ] Mutable: COMPILATION_NUM_THREADS : 22
[ INFO ] Mutable: NUM_STREAMS : 1
[ INFO ] Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ] Mutable: INFERENCE_PRECISION_HINT : f16
[ INFO ] Mutable: ENABLE_CPU_PINNING : NO
[ INFO ] Mutable: DEVICE_ID : 0
[ INFO ]
Hairul_Intel
Moderator

Hi Oleg,

From my end, I was able to run the hello_classification C++ sample without issue. Please make sure that your OpenVINO Development Tools Python package is the same version as your archive package:

06098409_1.png

 

Install the latest openvino-dev Python package for converting the googlenet-v1 model:

python -m pip install openvino-dev[caffe]

 

Furthermore, please ensure both googlenet-v1.xml and googlenet-v1.bin files are in the same directory and let me know if this fixes the issue.

 

 

Regards,

Hairul

 

Oleg_M_Intel
Employee

OK, after I put everything together in one folder, I got the samples working.

 

Still no GPU:

===

python3 hello_classification.py googlenet-v1.xml car_1.bmp GPU
[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Reading the model: googlenet-v1.xml
[ INFO ] Loading the model to the plugin
Traceback (most recent call last):
  File "/home/ol/samples/python/hello_classification/hello_classification.py", line 113, in <module>
    sys.exit(main())
  File "/home/ol/samples/python/hello_classification/hello_classification.py", line 79, in main
    compiled_model = core.compile_model(model, device_name)
  File "/opt/intel/openvino_2023.2.0/python/openvino/runtime/ie_api.py", line 543, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src/inference/src/core.cpp:113:
[ GENERAL_ERROR ] Check 'all_devices.size() > idx' failed at src/plugins/proxy/src/plugin.cpp:512:
Cannot get fallback device for index: 0. The total number of found devices is 0
===

 

Interesting output for NPU:

===

python3 hello_classification.py googlenet-v1.xml car_1.bmp NPU.3700
[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Reading the model: googlenet-v1.xml
[ INFO ] Loading the model to the plugin
Traceback (most recent call last):
  File "/home/ol/samples/python/hello_classification/hello_classification.py", line 113, in <module>
    sys.exit(main())
  File "/home/ol/samples/python/hello_classification/hello_classification.py", line 79, in main
    compiled_model = core.compile_model(model, device_name)
  File "/opt/intel/openvino_2023.2.0/python/openvino/runtime/ie_api.py", line 543, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src/inference/src/core.cpp:113:
[ GENERAL_ERROR ] Exception from src/vpux_plugin/src/plugin.cpp:579:
[ GENERAL_ERROR ] Got an error during compiler creation: Cannot load library '/opt/intel/openvino_2023.2.0/runtime/lib/intel64/libnpu_driver_compiler_adapter.so': libze_loader.so.1: cannot open shared object file: No such file or directory
===

 

I have the modules libopenvino_intel_npu_plugin.so and libnpu_driver_compiler_adapter.so, but where is libze_loader.so.1? Should I update LD_LIBRARY_PATH, or install Level Zero separately from OpenVINO and the NPU driver?
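Not an official recipe, but a quick stdlib-only way to check whether the dynamic linker can resolve the Level Zero loader before editing LD_LIBRARY_PATH (find_library consults ldconfig, so for default library paths it mirrors what the plugin's dlopen will see):

```python
from ctypes.util import find_library

def loader_available(name: str = "ze_loader") -> bool:
    """True if the dynamic linker can resolve lib<name>, e.g. libze_loader.so.1."""
    return find_library(name) is not None

if loader_available():
    print("libze_loader resolves; the NPU compiler adapter should be able to dlopen it")
else:
    print("libze_loader missing: install the oneAPI Level Zero loader, "
          "or add its directory to LD_LIBRARY_PATH / the ldconfig cache")
```

Note that find_library does not scan LD_LIBRARY_PATH itself, so a library made visible only via that variable may still report as missing here while dlopen succeeds; treat this as a first-pass check.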

Oleg_M_Intel
Employee

Forgot to mention: inference on the CPU works and produces outputs.

Oleg_M_Intel
Employee

After installing Level Zero and updating my LD_LIBRARY_PATH, I get another error:

python3 hello_classification.py googlenet-v1.xml car_1.bmp NPU.3700
[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Reading the model: googlenet-v1.xml
[ INFO ] Loading the model to the plugin
Traceback (most recent call last):
  File "/home/ol/samples/python/hello_classification/hello_classification.py", line 113, in <module>
    sys.exit(main())
  File "/home/ol/samples/python/hello_classification/hello_classification.py", line 79, in main
    compiled_model = core.compile_model(model, device_name)
  File "/opt/intel/openvino_2023.2.0/python/openvino/runtime/ie_api.py", line 543, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src/inference/src/core.cpp:113:
[ GENERAL_ERROR ] Exception from src/vpux_plugin/src/plugin.cpp:579:
[ GENERAL_ERROR ] Got an error during compiler creation: [ GENERAL_ERROR ] LevelZeroCompilerAdapter: Failed to initialize zeAPI. Error code: 78000001
Please make sure that the device is available.

 

 

Hairul_Intel
Moderator

Hi Oleg,

Thank you for sharing all the information.

 

We're investigating this issue and will update you on any findings as soon as possible.

 

 

Regards,

Hairul


Oleg_M_Intel
Employee

Some updates from my side. After working on testing the linux-npu-driver and applying some tricks (https://github.com/intel/linux-npu-driver/issues/15), my OS was able to recognize the NPU device (hello_query_device):

--- 

[ INFO ] NPU
[ INFO ] SUPPORTED_PROPERTIES:
[ INFO ] Immutable: AVAILABLE_DEVICES : 3720
[ INFO ] Immutable: CACHE_DIR : ""
[ INFO ] Immutable: CACHING_PROPERTIES : DEVICE_ARCHITECTURE NPU_COMPILATION_MODE_PARAMS NPU_DMA_ENGINES NPU_DPU_GROUPS NPU_COMPILATION_MODE NPU_DRIVER_VERSION NPU_COMPILER_TYPE NPU_USE_ELF_COMPILER_BACKEND
[ INFO ] Immutable: DEVICE_ARCHITECTURE : 3720
[ INFO ] Mutable: DEVICE_ID : ""
[ INFO ] Immutable: DEVICE_UUID : 80d1d11eb73811eab3de0242ac130004
[ INFO ] Immutable: FULL_DEVICE_NAME : Intel(R) AI Boost
[ INFO ] Immutable: INTERNAL_SUPPORTED_PROPERTIES : CACHING_PROPERTIES
[ INFO ] Mutable: LOG_LEVEL : LOG_NONE
[ INFO ] Mutable: NPU_COMPILATION_MODE : ""
[ INFO ] Mutable: NPU_COMPILATION_MODE_PARAMS : ""
[ INFO ] Mutable: NPU_COMPILER_TYPE : DRIVER
[ INFO ] Immutable: NPU_DEVICE_ALLOC_MEM_SIZE : 0
[ INFO ] Immutable: NPU_DEVICE_TOTAL_MEM_SIZE : 33328549888
[ INFO ] Mutable: NPU_DMA_ENGINES : -1
[ INFO ] Mutable: NPU_DPU_GROUPS : -1
[ INFO ] Immutable: NPU_DRIVER_VERSION : 16866772
[ INFO ] Mutable: NPU_PLATFORM : AUTO_DETECT
[ INFO ] Mutable: NPU_PRINT_PROFILING : NONE
[ INFO ] Mutable: NPU_PROFILING_OUTPUT_FILE : ""
[ INFO ] Mutable: NPU_USE_ELF_COMPILER_BACKEND : AUTO
[ INFO ] Immutable: NUM_STREAMS : 1
[ INFO ] Immutable: OPTIMAL_NUMBER_OF_INFER_REQUESTS : 1
[ INFO ] Immutable: OPTIMIZATION_CAPABILITIES : FP16 INT8 EXPORT_IMPORT
[ INFO ] Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ] Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 1
[ INFO ] Mutable: PERF_COUNT : NO
[ INFO ] Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 10 1
[ INFO ] Immutable: RANGE_FOR_STREAMS : 1 4
[ INFO ] 

--- 

However, I was not able to run hello_classification:

---

 

python3 hello_classification.py googlenet-v1.xml car_1.bmp NPU.3720
[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Reading the model: googlenet-v1.xml
[ INFO ] Loading the model to the plugin
error: FeasibleAllocation failed : Scheduler failure, cannot schedule anything and there is no buffer to spill
Traceback (most recent call last):
  File "/home/ol/samples/python/hello_classification/hello_classification.py", line 113, in <module>
    sys.exit(main())
  File "/home/ol/samples/python/hello_classification/hello_classification.py", line 79, in main
    compiled_model = core.compile_model(model, device_name)
  File "/opt/intel/openvino_2023.2.0/python/openvino/runtime/ie_api.py", line 543, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src/inference/src/core.cpp:113:
[ GENERAL_ERROR ] Exception from src/vpux_plugin/src/plugin.cpp:579:
LevelZeroCompilerInDriver: Failed to compile network. Error code: 2147483646. Compilation failed
Failed to create executable

---

Also, my Ubuntu can recognize the NPU but not the GPU, while my Arch Linux can recognize the GPU but not the NPU (the NPU driver is not working properly there).

 

Hopefully, with the Ubuntu 24.04 LTS and/or Linux 6.8 releases, Meteor Lake support on Linux will improve. Do we have hope?

Hari_B_Intel
Moderator

Hi Oleg,


Sorry for the late response. The team is still working on some fixes, and I'm glad that you were able to resolve the issue.

Our developer team is still working on fixing the bug in the NPU driver.

I will get back to you once we have the solution.


Regarding the Python hello_classification sample, we observed the same, and the developer is still working on this; I can suggest using the C++ code, which should work on NPU. Please let me know if you are still facing the issue with the C++ code.


Thank you


Oleg_M_Intel
Employee

Python is just a wrapper around the C++ engine, so I get fundamentally the same errors:

./hello_classification googlenet-v1.xml car_1.bmp NPU.3720
[ INFO ] Build ................................. 2023.2.0-13089-cfd42bd2cb0-HEAD
[ INFO ]
[ INFO ] Loading model files: googlenet-v1.xml
[ INFO ] model name: GoogleNet
[ INFO ]     inputs
[ INFO ]         input name: data
[ INFO ]         input type: f32
[ INFO ]         input shape: [1,3,224,224]
[ INFO ]     outputs
[ INFO ]         output name: prob
[ INFO ]         output type: f32
[ INFO ]         output shape: [1,1000]
error: FeasibleAllocation failed : Scheduler failure, cannot schedule anything and there is no buffer to spill
Exception from src/inference/src/core.cpp:113:
[ GENERAL_ERROR ] Exception from src/vpux_plugin/src/plugin.cpp:579:
LevelZeroCompilerInDriver: Failed to compile network. Error code: 2147483646. Compilation failed
Failed to create executable

 

Another problem: I still can't see the GPU device on Ubuntu, while I can see it on Arch Linux. Do we have a path to solving the missing GPU device on Ubuntu?

Enlin
New Contributor I

Hi,

I get the same issue on Windows 11 with an Intel Core Ultra CPU.

I installed the NPU driver successfully. Running the code in C++ with the OpenVINO 2023.3 runtime, I got this result:

 

[ INFO ] Build ................................. 2023.3.0-13775-ceeafaf64f3-releases/2023/3
[ INFO ]
[ INFO ] Available devices:
[ INFO ] CPU
[ INFO ] GNA.GNA_SW
[ INFO ] GNA.GNA_HW
[ INFO ] GPU

 

I can't see "NPU" in the list.

Please help, thanks.

 

Enlin Jiang.

 

 

 

 

ep150de
Employee

I have the same laptop (MSI Prestige Evo AI) and am attempting to run the OpenVINO sample Jupyter notebooks. The NPU doesn't show up in 108-gpu-device.ipynb, which only lists GPU/CPU.

 

Repro steps:

1. Install the latest NPU driver.
2. Install the OpenVINO sample notebooks from the GitHub source.
3. Run Jupyter notebooks.
4. Start notebook 108-gpu-device.ipynb.
5. Run through the notebook code samples.

Result: only GPU/CPU are listed as devices, and a device error is shown when forcing selection of the NPU.

 

Hairul_Intel
Moderator

Hi Oleg,

Thank you for your patience.

 

We just got feedback from the relevant teams regarding this issue.

 

We ran multiple tests with the Python and C++ sample applications.

In the case of the hello_classification sample, the app adds an additional preprocessing layer (hello_classification.py#L65) to the model for rescaling the input image. It looks like the error comes from this particular interpolation layer, as can be seen in the debug output:

helloclassification_interpolation_error.png

 

The issue can be worked around by using a different input image size than that of car_1.bmp (749x637).

For example, we tried modifying the input image to 700x600 or 800x700, and in both cases the application worked fine, both in Python and C++:

helloclassification_workin.png

 

We are continuing to work on the interpolation layer failing with specific input sizes.
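For reference, the resize workaround can be scripted. This is just a sketch (assuming Pillow; OpenCV's cv2.resize would work equally well), using the sizes mentioned above, with a synthetic 749x637 image standing in for car_1.bmp:

```python
from PIL import Image

SOURCE_SIZE = (749, 637)   # the car_1.bmp size that triggers the NPU compile failure
SAFE_SIZE = (700, 600)     # 700x600 and 800x700 both compiled fine in our tests

img = Image.new("RGB", SOURCE_SIZE)   # stand-in for Image.open("car_1.bmp")
resized = img.resize(SAFE_SIZE)
# resized.save("car_1_700x600.bmp") would produce the input to feed the sample
print("resized to:", resized.size)
```

Feeding the resized image to hello_classification avoids the failing interpolation size until the compiler fix lands.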

 

 

Regards,

Hairul

 

Hairul_Intel
Moderator

Hi Oleg,

This thread will no longer be monitored since we have provided information. If you need any additional information from Intel, please submit a new question.

 

 

Regards,

Hairul

