
Failed to create plugin /opt/intel/openvino_2020.2.120/deployment_tools/inference_engine/lib/intel64/libclDNNPlugin.so

Sertkaya__Orhan
Beginner

Firstly, I followed the instructions here.

I received an error while trying to run this command:

python3 /opt/intel/openvino_2020.2.120/deployment_tools/open_model_zoo/demos/python_demos/object_detection_demo_yolov3_async/object_detection_demo_yolov3_async.py -i sample.mp4 -m yolov3/frozen_darknet_yolov3_model.xml -d GPU

Output of the command:

[ INFO ] Creating Inference Engine...
[ INFO ] Loading network files:
    yolov3/frozen_darknet_yolov3_model.xml
    yolov3/frozen_darknet_yolov3_model.bin
/opt/intel/openvino_2020.2.120/deployment_tools/open_model_zoo/demos/python_demos/object_detection_demo_yolov3_async/object_detection_demo_yolov3_async.py:184: DeprecationWarning: Reading network using constructor is deprecated. Please, use IECore.read_network() method instead
  net = IENetwork(model=model_xml, weights=model_bin)
[ INFO ] Preparing inputs
[ INFO ] Loading model to the plugin
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import fnmatch, glob, traceback, errno, sys, atexit, locale, imp
Traceback (most recent call last):
  File "/opt/intel/openvino_2020.2.120/deployment_tools/open_model_zoo/demos/python_demos/object_detection_demo_yolov3_async/object_detection_demo_yolov3_async.py", line 362, in <module>
    sys.exit(main() or 0)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/open_model_zoo/demos/python_demos/object_detection_demo_yolov3_async/object_detection_demo_yolov3_async.py", line 233, in main
    exec_net = ie.load_network(network=net, num_requests=2, device_name=args.device)
  File "ie_api.pyx", line 178, in openvino.inference_engine.ie_api.IECore.load_network
  File "ie_api.pyx", line 187, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Failed to create plugin /opt/intel/openvino_2020.2.120/deployment_tools/inference_engine/lib/intel64/libclDNNPlugin.so for device GPU
Please, check your environment
[CLDNN ERROR]. No GPU device was found.

My CPU is Intel® Core™ i5-6600K

Processor Graphics: Intel® HD Graphics 530

According to my research, I think these are the currently supported GPUs. => supported-platforms

Is this why I'm getting an error?
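Two things in that log can be checked directly from Python: whether the Inference Engine sees a GPU device at all, and the IECore.read_network() call that the deprecation warning recommends. A minimal sketch, assuming the OpenVINO 2020.2 Python API and the model paths from the command above:

# Minimal check, assuming the OpenVINO 2020.2 Python API.
from openvino.inference_engine import IECore

model_xml = "yolov3/frozen_darknet_yolov3_model.xml"
model_bin = "yolov3/frozen_darknet_yolov3_model.bin"

ie = IECore()

# If "GPU" is not in this list, the clDNN plugin cannot find an Intel GPU
# and load_network(..., device_name="GPU") fails exactly as in the log.
print("Available devices:", ie.available_devices)

# IECore.read_network() replaces the deprecated IENetwork constructor.
net = ie.read_network(model=model_xml, weights=model_bin)

if "GPU" in ie.available_devices:
    exec_net = ie.load_network(network=net, device_name="GPU", num_requests=2)
    print("Network loaded to GPU")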

SIRIGIRI_V_Intel
Employee

Hi Orhan,

Can you confirm that you have followed the steps to configure Intel® Processor Graphics (GPU)?

If yes, the issue might be due to the driver. Please uninstall OpenVINO and the drivers, then restart the system so the changes take effect.

Then install OpenVINO and the drivers again.
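After reinstalling, a quick way to confirm that the GPU plugin can actually see the iGPU (a minimal sketch, assuming the OpenVINO 2020.x Python API):

from openvino.inference_engine import IECore

ie = IECore()
devices = ie.available_devices
print("Devices visible to the Inference Engine:", devices)

if "GPU" in devices:
    # FULL_DEVICE_NAME is a standard Inference Engine metric.
    print("GPU:", ie.get_metric("GPU", "FULL_DEVICE_NAME"))
else:
    print("No GPU found - check the OpenCL (NEO) driver installation.")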

Hope this helps.

Regards,

Ram prasad

Sertkaya__Orhan
Beginner

First of all, thank you for your feedback.

I could only see the dGPU with this command:

lspci | grep -i vga => 01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)

According to my research, I learned that I need to enable the iGPU Multi-Monitor option in the BIOS settings.

I followed this path in the BIOS. => 'Advanced' menu > System Agent (SA) Configuration > Graphics Configuration > iGPU Multi-Monitor setting > Enable

After that, I ran the command again:

lspci | grep -i vga => 

01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)

00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)

After these steps, I was able to run the command I tried before on the iGPU:

python3 /opt/intel/openvino_2020.2.120/deployment_tools/open_model_zoo/demos/python_demos/object_detection_demo_yolov3_async/object_detection_demo_yolov3_async.py -i sample.mp4 -m yolov3/frozen_darknet_yolov3_model.xml -d GPU

But when I try to run this model with the C++ demo, I get the following error:

orhan@orhan:~/omz_demos_build/intel64/Release$ ./object_detection_demo_yolov3_async -i ~/Downloads/tensorflow-yolo-v3-master/sample.mp4 -m ~/Downloads/tensorflow-yolo-v3-master/yolov3/frozen_darknet_yolov3_model.xml -d GPU
InferenceEngine: 0x7ff0ee27b030
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading Inference Engine
[ INFO ] Device info: 
    GPU
    clDNNPlugin version ......... 2.1
    Build ........... 42025
[ INFO ] Loading network files
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the device
[ INFO ] Start inference 
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
To switch between sync/async modes, press TAB key in the output window
[ ERROR ] Can't get ngraph::Function. Make sure the provided model is in IR version 10 or greater.
orhan@orhan:~/omz_demos_build/intel64/Release$ 
 

Why am I getting this error? How can I fix it?

My OpenVINO version is openvino_2020.2.120.
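That error points at the IR version of the converted model rather than at the GPU: the 2020.x C++ demos read models through ngraph and need IR version 10. A quick way to see which version the .xml declares (a minimal sketch, assuming the standard IR layout where the root <net> element carries a "version" attribute):

import xml.etree.ElementTree as ET

model_xml = "yolov3/frozen_darknet_yolov3_model.xml"  # same model as above

root = ET.parse(model_xml).getroot()
print("Root element:", root.tag)           # expected: net
print("IR version:", root.get("version"))  # 10 for models converted with 2020.x

If it reports a version below 10, reconverting the model with the 2020.2 Model Optimizer should produce an IR v10 model that the C++ demo can load.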

SIRIGIRI_V_Intel
Employee

Can you run the verification scripts with GPU? The error seems to be a model incompatibility issue.

Change the working directory to /opt/intel/openvino/deployment_tools/demo and run the command below:

./demo_security_barrier_camera.sh -d GPU

Let us know the results.

Regards,

Ram prasad

ShivSD
Beginner

Hi,

I ran the demo application and got the same "No GPU device was found" error. Can you please suggest how to enable the GPU?

OpenVINO build: 2020.4

OS: Ubuntu 18.04

lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation Device 3e9b (rev 02)
01:00.0 VGA compatible controller: NVIDIA Corporation Device 1e90 (rev a1)

Running the Inference Engine security_barrier_camera demo:

Run ./security_barrier_camera_demo -d GPU -d_va GPU -d_lpr GPU -i /opt/intel/openvino/deployment_tools/demo/car_1.bmp -m /home/nagarajashivashankar/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_lpr /home/nagarajashivashankar/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -m_va /home/nagarajashivashankar/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml

[ INFO ] InferenceEngine: 0x7fa8c353a030
[ INFO ] Files were added: 1
[ INFO ] /opt/intel/openvino/deployment_tools/demo/car_1.bmp
[ INFO ] Loading device GPU
[ ERROR ] Failed to create plugin /opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/lib/intel64/libclDNNPlugin.so for device GPU
Please, check your environment
[CLDNN ERROR]. No GPU device was found.
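For what it's worth, when both an NVIDIA dGPU and an Intel iGPU are present, the clDNN plugin only finds the Intel GPU if the iGPU is enabled, the Intel OpenCL (NEO) driver from the GPU configuration steps is installed, and the current user can access the GPU render node. A rough stdlib-only check (a sketch, assuming Linux; the exact group names are an assumption and can vary by distribution):

import glob
import grp
import os

# The Intel OpenCL driver exposes the iGPU through a render node.
render_nodes = glob.glob("/dev/dri/renderD*")
print("Render nodes:", render_nodes or "none found (driver missing or iGPU disabled?)")

# The GPU configuration step in the install guide adds the user to the video group.
user_groups = {grp.getgrgid(g).gr_name for g in os.getgroups()}
print("Current user's groups:", sorted(user_groups))
if not {"video", "render"} & user_groups:
    print("User is not in the video/render groups - rerun the GPU configuration steps.")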

GeerthanaaRavi
Employee

Hi ShivSD,

Have you found a solution for this?
