I have a problem.
I converted an object detection model (SSD512) to IR format with the Model Optimizer.
However, when I run benchmark_app.py on the GPU, I get a runtime error.
When I run benchmark_app.py on the CPU or VPU, it works fine.
How can I make this work on the GPU?
<Model>
Chainercv SSD512 : https://chainercv.readthedocs.io/en/stable/reference/links/ssd.html#chainercv.links.model.ssd.SSD512
<Environment>
OS : Ubuntu 18.04.4
CPU : Intel Core i7-7600U Processor
OpenVINO version : 2020.2
<Execute Command>
$ cd /opt/intel/openvino/deployment_tools/tools/benchmark_tool
$ python3 benchmark_app.py -m model.xml --target_device GPU
<Command Output>
[Step 1/11] Parsing and validating input arguments
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version............. 2.1.42025
[ INFO ] Device info
GPU
clDNNPlugin............. version 2.1
Build................... 42025
[Step 3/11] Reading the Intermediate Representation network
[ INFO ] Read network took 110.76 ms
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[ ERROR ] Error has occured for: normalize:Mul_0
Scale feature size(=2097152) is not equal to: input feature size(=512)
Traceback (most recent call last):
File "/opt/intel/openvino/python/python3.6/openvino/tools/benchmark/main.py", line 87, in run
exe_network = benchmark.load_network(ie_network, perf_counts)
File "/opt/intel/openvino/python/python3.6/openvino/tools/benchmark/benchmark.py", line 138, in load_network
num_requests=1 if self.api_type == 'sync' else self.nireq or 0)
File "ie_api.pyx", line 178, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 187, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Error has occured for: normalize:Mul_0
Scale feature size(=2097152) is not equal to: input feature size(=512)
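For reference, the same error can be reproduced without benchmark_app.py by loading the IR directly with the Inference Engine Python API (a minimal sketch; the file paths are placeholders for my model):
from openvino.inference_engine import IECore

ie = IECore()
# Read the IR produced by the Model Optimizer
net = ie.read_network(model="model.xml", weights="model.bin")
# Loading to CPU succeeds; loading to GPU raises the same RuntimeError
exec_net = ie.load_network(network=net, device_name="GPU", num_requests=1)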
Hi Shinji,
Please share more information about your model, command given to Model Optimizer to convert the trained model to Intermediate Representation (IR), and environment details (versions of Python, CMake, etc.).
If possible, please share the trained model files for us to reproduce your issue.
Regards,
Munesh
Thank you for your reply.
Sorry, I can't share the model.
I am sharing the Python and CMake versions along with the command I used to convert to IR.
[Environment]
Python version = 3.6.9
CMake version = 3.10.2
[Model Optimizer Command]
$ cd /opt/intel/openvino/deployment_tools/model_optimizer
$ python3 mo.py \
--framework onnx \
--data_type FP32 \
--model_name SSD512_FP32 \
--input_model SSD512.onnx \
--output_dir /home/models
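As a sanity check before conversion, I can verify that the exported ONNX file is structurally valid (a minimal sketch, assuming the onnx Python package is installed):
import onnx

model = onnx.load("SSD512.onnx")
onnx.checker.check_model(model)  # raises an exception if the graph is invalid
print(model.opset_import)        # shows the opset version used by the export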
Hi Shinji,
Some follow-up questions for you, since you can't share your model.
- Which GPU are you using?
- Did you complete the additional steps for GPU per the getting started guide? https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html#additional-GPU-steps
- Would you please confirm whether this issue is also seen on the latest OpenVINO toolkit release (2020.4)?
- Are you seeing this issue with any model when running on GPU or is it only with this particular model?
You can try running the following Image Classification verification script to confirm.
./demo_squeezenet_download_convert_run.sh -d GPU
Regards,
Munesh
Hi Munesh,
Here are my answers to your questions.
1. Which GPU are you using? ⇒ I am using Intel® HD Graphics 620.
2. Did you complete the additional steps for GPU per the getting started guide?
⇒ I run OpenVINO in Docker.
(image ⇒ https://hub.docker.com/r/openvino/ubuntu18_dev)
Do I still have to follow this procedure?
3. Would you please confirm whether this issue is also seen on the latest OpenVINO toolkit release (2020.4)?
⇒ I ran OpenVINO 2020.4 in Docker, and the problem was not solved.
4. Are you seeing this issue with any model when running on GPU or is it only with this particular model?
⇒ I have not seen this problem with other models.
Sincerely,
Shinji
Hi Shinji,
The GPU is not available in the Docker container by default. You must attach it to the container.
I believe you are getting errors because there are some additional steps to complete before running on GPU inside a Docker Container. See the steps outlined here.
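For example, a typical way to expose the integrated GPU to a container is to pass the /dev/dri device when starting it (a sketch; the image tag is only an example):
$ docker run -it --device /dev/dri:/dev/dri openvino/ubuntu18_dev:latest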
Please let me know if this information is helpful.
Best Regards,
Sahira
Hi Sahira,
I tried to follow the steps on the site you pointed me to.
However, instead of building a Docker image from the Dockerfile, I pulled a Docker image from Docker Hub.
So, using the instructions on the site as a guide, I ran the following commands in the currently running container.
The problem was not solved.
Is my procedure wrong?
<Execute Command>
$ mkdir /tmp/opencl
$ cd /tmp/opencl
$ usermod -aG video openvino
$ apt-get update && \
apt-get install -y --no-install-recommends ocl-icd-libopencl1 && \
rm -rf /var/lib/apt/lists/* && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-gmmlib_19.3.2_amd64.deb" --output "intel-gmmlib_19.3.2_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-igc-core_1.0.2597_amd64.deb" --output "intel-igc-core_1.0.2597_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-igc-opencl_1.0.2597_amd64.deb" --output "intel-igc-opencl_1.0.2597_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-opencl_19.41.14441_amd64.deb" --output "intel-opencl_19.41.14441_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-ocloc_19.41.14441_amd64.deb" --output "intel-ocloc_19.41.14441_amd64.deb" && \
dpkg -i /tmp/opencl/*.deb && \
ldconfig && \
rm -r /tmp/opencl
<Command Output>
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:2 http://archive.canonical.com/ubuntu bionic InRelease [10.2 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Get:4 http://archive.canonical.com/ubuntu bionic/partner Sources [1907 B]
Get:5 http://security.ubuntu.com/ubuntu bionic-security/restricted Sources [8723 B]
Get:6 http://security.ubuntu.com/ubuntu bionic-security/universe Sources [221 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:8 http://security.ubuntu.com/ubuntu bionic-security/multiverse Sources [3245 B]
Get:9 http://security.ubuntu.com/ubuntu bionic-security/main Sources [213 kB]
Get:10 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [116 kB]
Get:11 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [10.1 kB]
Get:12 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [897 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:14 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [1089 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic/universe Sources [11.5 MB]
Get:16 http://archive.ubuntu.com/ubuntu bionic/main Sources [1063 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic/multiverse Sources [216 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic/restricted Sources [5823 B]
Get:19 http://archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages [13.5 kB]
Get:20 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [186 kB]
Get:21 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages [1344 kB]
Get:22 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [11.3 MB]
Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/restricted Sources [11.0 kB]
Get:24 http://archive.ubuntu.com/ubuntu bionic-updates/main Sources [421 kB]
Get:25 http://archive.ubuntu.com/ubuntu bionic-updates/universe Sources [377 kB]
Get:26 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse Sources [7929 B]
Get:27 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1384 kB]
Get:28 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1425 kB]
Get:29 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [27.7 kB]
Get:30 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [132 kB]
Get:31 http://archive.ubuntu.com/ubuntu bionic-backports/universe Sources [4003 B]
Get:32 http://archive.ubuntu.com/ubuntu bionic-backports/main Sources [4301 B]
Get:33 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [8432 B]
Get:34 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [8286 B]
Fetched 32.6 MB in 18s (1790 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
ocl-icd-libopencl1 is already the newest version (2.2.11-1ubuntu1).
0 upgraded, 0 newly installed, 0 to remove and 92 not upgraded.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 655 100 655 0 0 1025 0 --:--:-- --:--:-- --:--:-- 1023
100 104k 100 104k 0 0 41060 0 0:00:02 0:00:02 --:--:-- 98192
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 659 100 659 0 0 992 0 --:--:-- --:--:-- --:--:-- 990
100 9.7M 100 9.7M 0 0 617k 0 0:00:16 0:00:16 --:--:-- 971k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 661 100 661 0 0 1007 0 --:--:-- --:--:-- --:--:-- 1006
100 18.0M 100 18.0M 0 0 1635k 0 0:00:11 0:00:11 --:--:-- 2257k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 660 100 660 0 0 952 0 --:--:-- --:--:-- --:--:-- 951
100 685k 100 685k 0 0 199k 0 0:00:03 0:00:03 --:--:-- 311k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 659 100 659 0 0 1142 0 --:--:-- --:--:-- --:--:-- 1142
100 91888 100 91888 0 0 37081 0 0:00:02 0:00:02 --:--:-- 49966
(Reading database ... 49297 files and directories currently installed.)
Preparing to unpack .../intel-gmmlib_19.3.2_amd64.deb ...
Unpacking intel-gmmlib (19.3.2) over (19.3.2) ...
Preparing to unpack .../intel-igc-core_1.0.2597_amd64.deb ...
Unpacking intel-igc-core (1.0.2597) over (1.0.2597) ...
Preparing to unpack .../intel-igc-opencl_1.0.2597_amd64.deb ...
Unpacking intel-igc-opencl (1.0.2597) over (1.0.2597) ...
Preparing to unpack .../intel-ocloc_19.41.14441_amd64.deb ...
Unpacking intel-ocloc (19.41.14441) over (19.41.14441) ...
Preparing to unpack .../intel-opencl_19.41.14441_amd64.deb ...
Unpacking intel-opencl (19.41.14441) over (19.41.14441) ...
Setting up intel-gmmlib (19.3.2) ...
Setting up intel-igc-core (1.0.2597) ...
Setting up intel-igc-opencl (1.0.2597) ...
Setting up intel-ocloc (19.41.14441) ...
Setting up intel-opencl (19.41.14441) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
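For reference, I can also check whether the container actually sees the GPU with clinfo (assuming it can be installed in the container):
$ apt-get install -y clinfo
$ clinfo | grep -i "device name"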
Sincerely,
Shinji
Hi Shinji,
I recommend following the steps outlined in the link that I provided earlier. I think the problem is that there is a missing step that is not allowing the Docker container to access the GPU.
Please follow the steps above and try to run your model again.
Let me know if you have any more questions.
Best Regards,
Sahira
Hi Shinji,
Since we have not heard back, this thread will be closed and no longer monitored. If you have any more questions, please open a new thread.
Thank you,
Sahira
