Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to run MULTI-device plugin (OpenVINO 2020.4 with Ubuntu 18.04)

thamml
Novice

I am using an UP board with the UP AI CORE X (the mPCIe form factor of the Intel® Movidius™ Myriad™ X VPU 2485).

I can run HETERO mode, but MULTI mode fails: the command still executes, but only on the CPU device.

*** The output below shows the HETERO command ***

/home/thamml/inference_engine_cpp_samples_build/intel64/Release/benchmark_app -d HETERO:HDDL,CPU -m /home/thamml/openvino_models/ir/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml -i /opt/intel/openvino/deployment_tools/demo/car.png
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /opt/intel/openvino/deployment_tools/demo/car.png
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version ............ 2.1
Build .................. 2020.4.0-359-21e092122f4-releases/2020/4
Description ....... API
[ INFO ] Device info:
CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2020.4.0-359-21e092122f4-releases/2020/4
HDDL
HDDLPlugin version ......... 2.1
Build ........... 2020.4.0-359-21e092122f4-releases/2020/4
HETERO
heteroPlugin version ......... 2.1
Build ........... 2020.4.0-359-21e092122f4-releases/2020/4

[Step 3/11] Setting device configuration
[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.
[Step 4/11] Reading the Intermediate Representation network
[ INFO ] Loading network files
[ INFO ] Read network took 242.90 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
[Step 7/11] Loading the model to the device

[21:32:12.5138][1746]I[ClientManager.cpp:159] client(id:8) registered: clientName=HDDLPlugin socket=2
[21:32:14.1005][1747]I[GraphManager.cpp:491] Load graph success, graphId=7 graphName=ResMobNet_v4 (LReLU) with single SSD head_0
[ INFO ] Load network took 3000.59 ms
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'data' precision U8, dimensions (NCHW): 1 3 320 544
[ WARNING ] Some image input files will be duplicated: 4 files are required but only 1 are provided
[ INFO ] Infer Request 0 filling
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (544, 320)
[ INFO ] Infer Request 1 filling
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (544, 320)
[ INFO ] Infer Request 2 filling
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (544, 320)
[ INFO ] Infer Request 3 filling
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (544, 320)
[Step 10/11] Measuring performance (Start inference asyncronously, 4 inference requests using 4 streams for CPU, limits: 60000 ms duration)

[Step 11/11] Dumping statistics report
Count: 880 iterations
Duration: 60312.00 ms
Latency: 275.85 ms
Throughput: 14.59 FPS
[21:33:14.6395][1746]I[ClientManager.cpp:189] client(id:8) unregistered: clientName=HDDLPlugin socket=2
[21:33:14.6515][1747]I[GraphManager.cpp:539] graph(7) destroyed

 

*** The output below shows the MULTI command ***

thamml@thamml-UP-APL01:~$ /home/thamml/inference_engine_cpp_samples_build/intel64/Release/benchmark_app –d MULTI:HDDL,CPU -m /home/thamml/openvino_models/ir/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml -i /opt/intel/openvino/deployment_tools/demo/car.png
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /opt/intel/openvino/deployment_tools/demo/car.png
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version ............ 2.1
Build .................. 2020.4.0-359-21e092122f4-releases/2020/4
Description ....... API
[ INFO ] Device info:
CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2020.4.0-359-21e092122f4-releases/2020/4

[Step 3/11] Setting device configuration
[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.
[Step 4/11] Reading the Intermediate Representation network
[ INFO ] Loading network files
[ INFO ] Read network took 236.99 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
[Step 7/11] Loading the model to the device
[ INFO ] Load network took 1522.07 ms
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'data' precision U8, dimensions (NCHW): 1 3 320 544
[ WARNING ] Some image input files will be duplicated: 4 files are required but only 1 are provided
[ INFO ] Infer Request 0 filling
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (544, 320)
[ INFO ] Infer Request 1 filling
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (544, 320)
[ INFO ] Infer Request 2 filling
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (544, 320)
[ INFO ] Infer Request 3 filling
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (544, 320)
[Step 10/11] Measuring performance (Start inference asyncronously, 4 inference requests using 4 streams for CPU, limits: 60000 ms duration)
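One detail visible in the pasted MULTI command above: the `-d` was typed with an en dash (`–d`) rather than an ASCII hyphen, and Step 2 of this run lists only the CPU plugin. Purely as an illustration (the helper below is hypothetical, not part of OpenVINO or benchmark_app), this is a quick way to flag "smart" dashes in a pasted command line:

```python
# Hypothetical helper: flag non-ASCII dashes that a shell passes through as
# plain text, so the option they were meant to introduce is silently ignored.
BAD_DASHES = {"\u2013": "en dash", "\u2014": "em dash", "\u2212": "minus sign"}

def find_bad_dashes(cmdline):
    """Return (index, name) pairs for every non-ASCII dash in cmdline."""
    return [(i, BAD_DASHES[ch]) for i, ch in enumerate(cmdline) if ch in BAD_DASHES]

hetero_cmd = "benchmark_app -d HETERO:HDDL,CPU"
multi_cmd = "benchmark_app \u2013d MULTI:HDDL,CPU"  # en dash, as in the pasted command

print(find_bad_dashes(hetero_cmd))  # []
print(find_bad_dashes(multi_cmd))   # [(14, 'en dash')]
```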

 

Kindly advise,

Tham

1 Solution
Iffa_Intel
Moderator

Greetings,


This is the full documentation of the Multi-Device plugin: https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_MULTI.html


And this is how to use it correctly, including the things you need to check: https://www.youtube.com/watch?v=xbORYFEmrqU
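For reference, a minimal sketch of a MULTI invocation (reusing the model and image paths from the question; the device string follows the `MULTI:<priority list>` syntax in the linked documentation, and `-d` must be a plain ASCII hyphen):

```shell
# Same benchmark_app invocation as the HETERO run, but targeting the
# MULTI device with HDDL listed before CPU in the priority list.
/home/thamml/inference_engine_cpp_samples_build/intel64/Release/benchmark_app \
  -d MULTI:HDDL,CPU \
  -m /home/thamml/openvino_models/ir/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml \
  -i /opt/intel/openvino/deployment_tools/demo/car.png
```

If MULTI is working, Step 2 of the output should list the HDDL and MULTI plugins in addition to CPU, as the HETERO run above listed HDDL and HETERO.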



Sincerely,

Iffa


Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Sincerely,

Iffa

