Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO inference on Intel 11th gen processor iGPU not working

kekpirat
Beginner

Hello,

I'm trying to use the iGPU on my 11th Gen Intel(R) Core(TM) i7-11370H laptop, which comes with Iris Xe graphics from what I understand (the graphics is labelled as Mesa Intel® Xe Graphics (TGL GT2) in the Settings > About page on Ubuntu). My laptop runs Ubuntu 20.04.2 LTS (Linux version 5.11.0-27-generic). I noticed from other threads that the output of clinfo can be useful, so I've attached it below. I've followed the setup instructions for OpenVINO with iGPUs as best I could, e.g. running `./install_NEO_OCL_driver.sh`, which succeeded as far as I could tell from its output.


The issue appears when I pass "GPU" as the device, for example in samples/style_transfer_sample (I used the command `./style_transfer_sample -i images/ -m fast-neural-style-mosaic-onnx.onnx -d GPU`). When I launch it, it gets stuck after printing "[INFO] Loading model to the device", my screen freezes momentarily (3-5 s), and htop shows the application pinning one CPU core at 100% while it is stuck.


Any assistance would be greatly appreciated.

5 Replies
nikos1
Valued Contributor I

clinfo looks good, thanks for attaching.


It typically takes many seconds to compile all the OpenCL kernels for clDNN when -d GPU is used, so please allow some time for it to initialize properly. What happens after a minute or so?


Also, please try simpler samples to make sure -d GPU works as expected. I am not sure whether style_transfer_sample can run on the GPU device; please double-check that too.


Cheers,


nikos

kekpirat
Beginner

Thanks for the quick reply!
I tried the hello_classification sample (https://docs.openvinotoolkit.org/latest/openvino_inference_engine_samples_hello_classification_README.html) instead, which lists its supported devices as "All", using the Model-Optimizer-converted alexnet from the model downloader, and I get the same behaviour as before (`./hello_classification alexnet.xml apple.jpg GPU`). I've left it running for 10+ minutes and it's still stuck. It works flawlessly if I use the CPU.

nikos1
Valued Contributor I

You are welcome! Hmm... it sounds like a bad OpenCL environment.

Let me try here on a similar Tigerlake GPU.

In the meantime, if you like, you could also try installing the OpenCL compute runtime package directly:

sudo apt-get install intel-opencl-icd

reference: https://github.com/intel/compute-runtime

Please post your clinfo output again after that.

Cheers,

nikos

Wan_Intel
Moderator

Hi Nikos1,

Thank you for sharing your answer with us!


Hi Kekpirat,

Thank you for reaching out to us.


For your information, I have validated the Hello Classification C++ sample with squeezenet1.1 using the CPU and GPU plugins. The GPU plugin takes longer to load the network than the CPU plugin.


This is because the GPU plugin incurs a one-time overhead (on the order of a few seconds) to compile its OpenCL kernels. The compilation happens when the network is loaded to the GPU plugin and does not affect the inference time.


To optimize GPU performance, you may refer to here for more information.


Regards,

Wan


Wan_Intel
Moderator

Hi Kekpirat,


This thread will no longer be monitored since we have provided a solution.

If you need any additional information from Intel, please submit a new question.

Regards,

Wan

