Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Unable to configure GPU clDNNPlugin when using OpenVINO Python API (2019.R2)

Mike_L_3
Beginner

Hi all,

I am evaluating object detection models and am currently unable to get the model_downloader pre-trained models to run when targeting the GPU with the Python API, due to unsupported layers. I am basing my implementation on the `object_detection_demo_ssd_async` Python demo, attempting to use the `set_config` method of IECore to load the `cldnn_global_custom_kernels.xml` configuration for the GPU...

import logging as log
from openvino.inference_engine import IECore

openvino_dir = '/opt/intel/openvino'
openvino_lib_dir = openvino_dir + '/deployment_tools/inference_engine/lib/intel64/'
gpu_config = openvino_lib_dir + 'cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml'

class Detector:
	def __init__(self, model_xml, labels_path, device='GPU'):
		self.ie  = IECore()

		if 'GPU' in self.ie.available_devices and device == 'GPU':
			# Getting problems with a bunch of different layers, including PriorBoxClustered, which
			# we expect should be covered by gpu_config (cldnn_global_custom_kernels.xml)
			log.debug('Loading GPU config')
			self.device = 'GPU'
			self.ie.set_config(config={'CONFIG_FILE': gpu_config}, device_name=self.device)
		elif device == 'CPU':
			log.debug('Loading CPU extensions')
			self.device = 'CPU'
			for ext in ['libcpu_extension_avx2.so', 'libcpu_extension_avx512.so', 'libcpu_extension_sse4.so']:
				self.ie.add_extension(openvino_lib_dir + ext, 'CPU')
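
For reference, the class is constructed like this (the paths here are placeholders, not the ones from my project):

# Hypothetical usage; the model and label paths are placeholders
detector = Detector(
    model_xml='ssd_mobilenet_v2_coco.xml',
    labels_path='labels.txt',
    device='GPU')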

Running GPU inference on the `ssd_mobilenet_v2_coco` model results in the following unsupported layers:

[ERROR] Following layers are not supported by the plugin for GPU:
	image_tensor
	PriorBoxClustered_5
	PriorBoxClustered_4
	PriorBoxClustered_3
	PriorBoxClustered_2
	PriorBoxClustered_1
	PriorBoxClustered_0
	ConcatPriorBoxesClustered
	do_reshape_loc/value/Output_0/Data__const
	do_reshape_conf/value/Output_0/Data__const
	do_reshape_conf
	do_reshape_loc
	DetectionOutput   

I have tried most of the other pre-trained models available with model_downloader and they all have similar issues. I am running on an Intel i7-8550U with Intel UHD Graphics 620 (rev 07). I know that my GPU is capable of running the model because I am able to run inference on the GPU using the `object_detection_sample_ssd` C++ sample:

openvino@f81d2a7effb7:~/inference_engine_samples_build/intel64/Release$ ./object_detection_sample_ssd -m ~/object_detection/common/ssd_mobilenet_v2_coco/tf/FP32/ssd_mobilenet_v2_coco.xml -i ~/src/input.jpg -d GPU
[ INFO ] InferenceEngine: 
	API version ............ 2.0
	Build .................. custom_releases/2019/R2_f5827d4773ebbe727c9acac5f007f7d94dd4be4e
	Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/openvino/src/input.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info: 
	GPU
	clDNNPlugin version ......... 2.0
	Build ........... 27579
[ INFO ] Loading network files:
	/home/openvino/object_detection/common/ssd_mobilenet_v2_coco/tf/FP32/ssd_mobilenet_v2_coco.xml
	/home/openvino/object_detection/common/ssd_mobilenet_v2_coco/tf/FP32/ssd_mobilenet_v2_coco.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ WARNING ] Image is resized from (1080, 675) to (300, 300)
[ INFO ] Batch size is 1
[ INFO ] Start inference
[ INFO ] Processing output blobs
[0,1] element, prob = 0.985137    (759,244)-(1019,671) batch id : 0 WILL BE PRINTED!
[1,1] element, prob = 0.846156    (619,358)-(716,628) batch id : 0 WILL BE PRINTED!
[2,1] element, prob = 0.465734    (547,318)-(576,367) batch id : 0
[3,27] element, prob = 0.74116    (836,352)-(969,568) batch id : 0 WILL BE PRINTED!
[ INFO ] Image out_0.bmp created!
[ INFO ] Execution successful

Based on these results, I have some questions:

  1. Is `IECore.set_config` the correct way to configure the GPU to use the cldnn_global_custom_kernels?
  2. What happened to the `set_affinity` method of the IENetwork class? The documentation for `IECore.set_config` says "Usage examples: See the set_affinity method of the IENetwork class"; however, there is no documented `set_affinity` method.
  3. Which of the model_downloader models have been validated to run on the GPU, and what configuration is required to get them running? Which are definitely not supported?

I am attaching an archive of my project directory; if you have Docker installed, you should be able to re-create the issue I am having by running the following...

    tar zxf openvino-gpu-test.tar.gz
    cd openvino-gpu-test
    make build
    make run-cpu
    make run-gpu

"detect.py" has all of the python code and the Makefile, Dockerfile, and README.md files should make clear how the project is configured.

Thanks for any help or insight!

Shubha_R_Intel
Employee

Dear Mike L.,

When you run the exact same code on CPU, can you kindly tell me what happens? It looks like the OpenVINO sample object_detection_sample_ssd worked fine for you, though.

Let me answer your questions:

1) Yes, you are correct. Here is the C++ version:

    // clDNN Extensions are loaded from an .xml description and OpenCL kernel files
    ie.SetConfig({ { PluginConfigParams::KEY_CONFIG_FILE, FLAGS_c } }, "GPU");
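
The equivalent in the Python API should look roughly like this, which matches the set_config call in your snippet (the path is only illustrative):

from openvino.inference_engine import IECore

ie = IECore()
# Point the clDNN (GPU) plugin at the custom-kernels description file
gpu_config = '/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/' \
             'cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml'
ie.set_config(config={'CONFIG_FILE': gpu_config}, device_name='GPU')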

2) There is a documented "set_default_affinity" method. Please review the IENetwork class documentation.
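
If you want per-layer affinity for the HETERO plugin, a minimal sketch with the 2019 R2 Python API would be something like the following (this assumes each entry of net.layers exposes a writable affinity attribute; model_xml and model_bin are placeholders for your IR files):

from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model=model_xml, weights=model_bin)

# Ask the HETERO plugin which device it would assign to each layer,
# then pin every layer to that device before loading the network
layer_map = ie.query_network(net, 'HETERO:GPU,CPU')
for layer_name, device in layer_map.items():
    net.layers[layer_name].affinity = device

exec_net = ie.load_network(network=net, device_name='HETERO:GPU,CPU')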

3) There are a few layers unsupported by the GPU, but not many. Please see the Supported Devices documentation for more info.

I'm confused by your issue because, from your write-up above, I see that object_detection_sample_ssd succeeded for you. You didn't supply the command line, but I'm assuming -d GPU was used.

Thanks,

Shubha

Mike_L_3
Beginner

Hi Shubha, thanks for your response!

I found the source of my issue: the sample code for detecting unsupported layers is only supposed to be run when targeting the CPU, whereas I was also running it when targeting the GPU. I updated the code to merge the CPU- and GPU-supported layers when checking for unsupported layers, and inference is now running correctly on the GPU:

supported_layers = self.ie.query_network(self.net, self.device)
if self.device == 'GPU':
	supported_layers.update(self.ie.query_network(self.net, 'CPU'))
unsupported_layers = [l for l in self.net.layers.keys() if l not in supported_layers]
if len(unsupported_layers) != 0: # network has unsupported layers
	...
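
For completeness, the surrounding flow in __init__ looks roughly like this (a sketch with placeholder names; IENetwork and load_network are the standard 2019 R2 calls):

# model_xml is the path to the IR .xml produced by the Model Optimizer;
# os is imported at module level and IENetwork comes from openvino.inference_engine
model_bin = os.path.splitext(model_xml)[0] + '.bin'
self.net = IENetwork(model=model_xml, weights=model_bin)

# ... unsupported-layer check shown above ...

# Compile the network for the selected device
self.exec_net = self.ie.load_network(network=self.net, device_name=self.device, num_requests=2)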

All the best!

Mike

Shubha_R_Intel
Employee

Dear Mike L.,

All the best to you also!

Super glad that you solved your issue, and thanks for reporting your success to the OpenVINO community!

Shubha
