shin_47
Beginner

OpenVINO 2020.3LTS/ SSD 512 benchmark does not work on GPU

I have a problem.

I converted an object detection model (SSD 512) to IR format with the Model Optimizer.
However, when I run benchmark_app.py on the GPU, I get a runtime error.
When I run benchmark_app.py on a CPU or VPU, it works fine.
How can I make this work on the GPU?

※Other models run on the GPU without problems.

 

<Model>
Chainercv SSD512 : https://chainercv.readthedocs.io/en/stable/reference/links/ssd.html#chainercv.links.model.ssd.SSD512

※I'm sorry, but I can't share the model.

 

<Environment>
OS : Ubuntu 18.04.4
CPU : Intel 7th Gen Core i5-7300U
GPU : Intel® HD Graphics 620
OpenVINO version : 2020.3 LTS

※Other versions (2020.2 and 2020.4) did not resolve this issue either.

<Execute Command>
$ cd /opt/intel/openvino/deployment_tools/tools/benchmark_tool
$ python3 benchmark_app.py -m model.xml --target_device GPU

<Command Output>

[Step 1/11] Parsing and validating input arguments
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version............. 2.1.2020.3.0-3467-15f2c61a-releases/2020/3
[ INFO ] Device info
GPU
clDNNPlugin............. version 2.1
Build................... 2020.3.0-3467-15f2c61a-releases/2020/3

[Step 3/11] Reading the Intermediate Representation network
[ INFO ] Read network took 107.08 ms
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[ ERROR ] Error has occured for: normalize:Mul_0
Scale feature size(=2097152) is not equal to: input feature size(=512)
Traceback (most recent call last):
File "/opt/intel/openvino_2020.3.194/python/python3.6/openvino/tools/benchmark/main.py", line 87, in run
exe_network = benchmark.load_network(ie_network, perf_counts)
File "/opt/intel/openvino_2020.3.194/python/python3.6/openvino/tools/benchmark/benchmark.py", line 138, in load_network
num_requests=1 if self.api_type == 'sync' else self.nireq or 0)
File "ie_api.pyx", line 178, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 187, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Error has occured for: normalize:Mul_0
Scale feature size(=2097152) is not equal to: input feature size(=512)

 

Please tell me how to solve this problem.

Sahira_Intel
Moderator

Hi Shin_47,

I would just like to confirm that you are not using Docker, as I believe you had a similar issue when running inside a container.

I am escalating this issue for further investigation.

Best Regards,
Sahira 

shin_47
Beginner

Hi Sahira,

 

Thank you for your reply.

I'm not using Docker this time.

 

Best Regards,
Shin_47

Sahira_Intel
Moderator

Hi Shin_47,

In the meantime, have you tried OpenVINO's SSD 512 model? It is part of the Open Model Zoo. You should not get any errors running it on the GPU with the latest version of OpenVINO, 2021.1.

Best Regards,

Sahira

shin_47
Beginner

Hi Sahira,

 

I tried the SSD 512 model from the Open Model Zoo.
With that model, neither OpenVINO 2021.1 nor 2020.3 produced an error on the GPU.

I then compared my SSD 512 model to the Open Model Zoo SSD 512 model.
There were several differences, but the one that stood out was the layer named in the error message: the layer with id 57. (In the Open Model Zoo SSD 512, this layer is named "conv4_3_norm".)
The differences are listed below.

------------------------------------------------------------------------------

◆my SSD 512

<layer id="57" name="Mul_0" type="Multiply" version="opset1">
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>512</dim>
            <dim>64</dim>
            <dim>64</dim>
        </port>
        <port id="1">
            <dim>1</dim>
            <dim>512</dim>
            <dim>64</dim>
            <dim>64</dim>
        </port>
    </input>
    <output>
        <port id="2" precision="FP32">
            <dim>1</dim>
            <dim>512</dim>
            <dim>64</dim>
            <dim>64</dim>
        </port>
    </output>
</layer>

◆OpenModelZoo's SSD 512 model.

<layer id="57" name="conv4_3_norm" type="Multiply" version="opset1">
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>512</dim>
            <dim>64</dim>
            <dim>64</dim>
        </port>
        <port id="1">
            <dim>1</dim>
            <dim>512</dim>
            <dim>1</dim>
            <dim>1</dim>
        </port>
    </input>
    <output>
        <port id="2" precision="FP32">
            <dim>1</dim>
            <dim>512</dim>
            <dim>64</dim>
            <dim>64</dim>
        </port>
    </output>
</layer>

------------------------------------------------------------------------------
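The mismatch between the two IR snippets above can be reproduced with a small NumPy sketch (NumPy broadcasting stands in here for the Multiply layer's opset1 semantics; shapes are taken directly from the XML above):

```python
import numpy as np

# Input feature map to the normalize Multiply layer: NCHW = [1, 512, 64, 64]
feat = np.ones((1, 512, 64, 64), dtype=np.float32)

# Open Model Zoo IR: per-channel scale [1, 512, 1, 1], which broadcasts
# over H and W -- 512 scale values, one per channel
scale_ok = np.ones((1, 512, 1, 1), dtype=np.float32)
out = feat * scale_ok  # output shape stays [1, 512, 64, 64]

# Failing IR: the scale constant is materialized at full spatial size
# [1, 512, 64, 64], i.e. 512 * 64 * 64 = 2097152 elements -- the exact
# "scale feature size(=2097152)" the GPU plugin rejects
assert 512 * 64 * 64 == 2097152
print(out.shape)
```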

Based on this information, what is the cause of the error?

 

Best Regards,

Shin_47

Sahira_Intel
Moderator

Hi Shin_47,

Thank you for your patience. It is difficult to pinpoint the exact cause of the error without seeing and testing the model, but I will let you know as soon as we find something to share with you.

Best Regards,

Sahira 

shin_47
Beginner

Hi Sahira,


Thank you.
I look forward to hearing from you.


Best Regards,

Shin_47

Sahira_Intel
Moderator

Hi,

It looks like you forgot to pass the proper mean/scale values to the Model Optimizer during conversion. For OpenVINO's SSD512, see the documentation here: https://docs.openvinotoolkit.org/latest/omz_models_public_ssd512_ssd512.html

Please review your IR file, check what normalization was used for preprocessing, and convert the model again accordingly.
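For illustration, a conversion command of the sort being suggested might look like the one below. The mean/scale values shown are placeholders, not the values for this particular ChainerCV model; the correct values depend on the preprocessing the model was trained with.

```shell
# Sketch only: model path and mean/scale values are placeholders.
# Substitute the normalization your model's preprocessing actually uses.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model model.onnx \
    --mean_values "[123.68,116.78,103.94]" \
    --scale_values "[1,1,1]" \
    --data_type FP32
```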

Best Regards,

Sahira 
