Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Mask_rcnn_inception_v2_coco cannot be executed in HDDL

taka
Beginner

When Mask_rcnn_inception_v2_coco is executed on HDDL in the following environment, no results are returned.

CPU: Atom x7-E3950
VPU: AEEON AI Core XM 2280
OS: Ubuntu
OpenVINO 2020

---------------------------------------------------------------
./benchmark_app -m mask.xml -d HDDL

[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.

[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
        API version ............ 2.1
        Build .................. 37988
        Description ....... API
[ INFO ] Device info:
        HDDL
        HDDLPlugin version ......... 2.1
        Build ........... custom_releases/2020/1_26d55a5f6c20c6b5578c63933b98c2faee462a48

[Step 3/11] Setting device configuration
[Step 4/11] Reading the Intermediate Representation network
[ INFO ] Loading network files
[ INFO ] Read network took 53.14 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1, precision: MIXED
[Step 6/11] Configuring input of the model
[Step 7/11] Loading the model to the device
[ INFO ] Load network took 386596.62 ms
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=8102832
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=8102832
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=8102832
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=8102832
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=8102832
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=8102832
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=8102832
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=8102832

--------------------------------------------------------------

Models other than Mask_rcnn_inception_v2 run without problems.

On Windows with a Core i7-8700 and an AI Core XP4, Mask_rcnn_inception_v2_coco executes successfully.


The original model was obtained from the link below and converted to IR.
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

Best Regards,
 

Max_L_Intel
Moderator

Hi Taka.

Please let us know the following:

1) Have you performed these additional steps for HDDL from here - https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_windows.html#hddl-myriad

2) Do you see the same "operation not permitted" messages when executing this command on CPU device (-d CPU) on the same machine?

taka
Beginner

1) No. It runs on Ubuntu, not Windows 10. I am following the steps for Linux.

 

2) There is no such message for the CPU device (-d CPU). The results are output normally.

GoogLeNet v1 and other models work on the HDDL device.

If I run mask_rcnn_inception_v2 on the HDDL device, the message appears after the plugin load has been running for at least 10 minutes.

Does the Atom x7-E3950 take so long to load the plugin that it times out somewhere?

 

An extract from /var/log/daemon.log is attached.
Again, the error occurs only with mask_rcnn_inception_v2.
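
(For reference, the HDDL-related lines can be pulled out of the system log with something like the following; this assumes the default rsyslog setup that writes to /var/log/daemon.log.)

# Show the most recent hddldaemon messages from the system log
grep -i "hddldaemon" /var/log/daemon.log | tail -n 100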

Best Regards,

 

 

   

Max_L_Intel
Moderator

Hi Taka.

1) I'm sorry, I meant the one for Linux from here - https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux_ivad_vpu.html
Have you done that? (A rough sketch of the key steps is included below, after point 3.)

2) I've tested benchmark_app with a frozen Mask_rcnn_inception_v2_coco model on my end, on an HDDL Mustang-V100-MX8 device. Although I see the same "operation not permitted" messages, it still provides the results report (see below) approximately 4 minutes after the command is executed. So, as you mentioned, do you see the results report on your side within 10 minutes?

The model we used is http://download.tensorflow.org/models/object_detection/mask_rcnn_inception_v2_coco_2018_01_28.tar.gz
Converted as per https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html#how_to_convert_a_model (a sketch of the conversion command is also included below).

3) Unfortunately we don't have inference performance benchmarking information for Atom E3900, but you could use this resource as a reference - https://docs.openvinotoolkit.org/latest/_docs_performance_benchmarks.html 
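
(Regarding point 1, a rough sketch of the HDDL-specific setup steps from that Linux guide, assuming a default OpenVINO 2020.1 installation under /opt/intel/openvino; the exact script names can differ between releases.)

# Add the current user to the "users" group, then log out and back in
sudo usermod -a -G users "$(whoami)"

# Define HDDL_INSTALL_DIR and install the Vision Accelerator (HDDL) dependencies
source /opt/intel/openvino/bin/setupvars.sh
${HDDL_INSTALL_DIR}/install_IVAD_VPU_dependencies.sh

# Install the kernel drivers used by hddldaemon, then reboot
cd ${HDDL_INSTALL_DIR}/drivers
sudo ./setup.sh install

(Regarding point 2, a sketch of the typical Model Optimizer command for this TensorFlow Object Detection API model; paths assume OpenVINO 2020.1, and FP16 is typically used for VPU/HDDL targets. Flag names have changed across releases, so check the conversion guide above for your version.)

cd /opt/intel/openvino/deployment_tools/model_optimizer
python3 mo_tf.py \
  --input_model ~/Downloads/mask_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb \
  --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json \
  --tensorflow_object_detection_api_pipeline_config ~/Downloads/mask_rcnn_inception_v2_coco_2018_01_28/pipeline.config \
  --reverse_input_channels \
  --data_type FP16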

/opt/intel/openvino_2020.1.023/deployment_tools/tools/benchmark_tool$ python3 benchmark_app.py -m ~/Downloads/frozen_inference_graph.xml -d HDDL
[Step 1/11] Parsing and validating input arguments
[ WARNING ]  -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README. 
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
         API version............. 2.1.37988
[ INFO ] Device info
         HDDL
         HDDLPlugin.............. version 2.1
         Build................... custom_releases/2020/1_26d55a5f6c20c6b5578c63933b98c2faee462a48

[Step 3/11] Reading the Intermediate Representation network
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1, precision: MIXED
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[14:35:14.3886][4652]I[ClientManager.cpp:159] client(id:4) registered: clientName=HDDLPlugin socket=2
[14:37:07.9641][4653]I[GraphManager.cpp:491] Load graph success, graphId=4 graphName=Function_4
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'image_info' precision FP32, dimensions (NC): 1 3
[ INFO ] Network input 'image_tensor' precision U8, dimensions (NCHW): 1 3 800 800
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 1 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 2 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 3 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 4 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 5 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 6 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 7 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 8 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 9 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 10 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 11 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 12 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 13 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 14 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 15 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 16 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 17 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 18 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 19 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 20 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 21 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 22 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 23 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 24 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 25 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 26 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 27 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 28 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 29 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 30 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 31 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[Step 10/11] Measuring performance (Start inference asyncronously, 32 inference requests, limits: 60000 ms duration)
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[Step 11/11] Dumping statistics report
Count:      352 iterations
Duration:   73426.81 ms
Latency:    6353.97 ms
Throughput: 4.79 FPS
[14:38:22.0767][4652]I[ClientManager.cpp:189] client(id:4) unregistered: clientName=HDDLPlugin socket=2

taka
Beginner

Hi Max

>1) I'm sorry, I meant the one for Linux from here - https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_...
Have you done that?

⇒Yes 

>2) So as you mentioned, do you see the results report from your side in 10 minutes?

⇒Yes

The other environment (Windows / Core i7-8700 / AI Core XP4) outputs the result in 2-3 minutes.

On both the Atom and the Core i7, one CPU core stays at 100% during the plugin load.

I think the difference between 2-3 minutes on the Core i7 and 10-15 minutes on the Atom comes down to CPU performance.

Could the Atom x7-E3950 be timing out somewhere during the plugin load?

Best Regards,

 

Max_L_Intel
Moderator

Hi Taka.

Yes, this might be the case for the Atom x7-E3950. The application goes through multiple iterations in order to measure performance, so the Core i7 should demonstrate higher performance and give the output faster.

And we don't expect any timeouts during the benchmark_app execution for any device.

Hope this helps.
Thanks.

Best regards, Max.

taka
Beginner

Hi Max

I'm sorry.

>2) So as you mentioned, do you see the results report from your side in 10 minutes?

⇒No

The Atom x7-E3950 & HDDL combination returns no results report.

The HDDL plugin load fails.

GoogLeNet v1 and other models can be executed on HDDL.

 

 

Best Regards,

 

 

 

Max_L_Intel
Moderator

Hi Taka.

Unfortunately, we don't have an AI Core XM 2280 device for testing on our end.
Can you please try to replicate the case that worked for me, with the same IR model and the same benchmark_app version?

1) I've sent you via PM the converted Mask_rcnn_inception_v2_coco IR model that we used.

2) Use the Python version of the benchmark_app tool from openvino/deployment_tools/tools/benchmark_tool, as sketched below:

python3 benchmark_app.py -m ~/Downloads/frozen_inference_graph.xml -d HDDL
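
(A minimal usage sketch, assuming a default installation under /opt/intel/openvino and the converted IR in ~/Downloads:)

cd /opt/intel/openvino/deployment_tools/tools/benchmark_tool
python3 -m pip install -r requirements.txt      # one-time dependency setup for the Python tool
source /opt/intel/openvino/bin/setupvars.sh
python3 benchmark_app.py -m ~/Downloads/frozen_inference_graph.xml -d HDDL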

 

taka
Beginner

Hi Max

benchmark_app returns no results (it hangs at Step 10).
Also, I tried mask_rcnn_demo but got an error (InferTaskSync result timeout failed).

To summarize the situation so far:
- In the Atom x7-E3950 & AAEON AI Core XM 2280 environment, mask_rcnn fails on HDDL.
(However, models such as googlenet_v1 work with HDDL.)

- In the Core i7-8700 & AI Core XP4 environment, mask_rcnn succeeds on HDDL.

 

I think the problem is specific to running mask_rcnn on Atom & HDDL.
Can you check it in an Atom environment?
If the problem is reproduced, could you file an issue with the development team?

 

----------------benchmark_app.py------------------

root@9a8f10b19908:/opt/intel/openvino/deployment_tools/tools/benchmark_tool# python3 benchmark_app.py -m frozen_inference_graph.xml -d HDDL
[Step 1/11] Parsing and validating input arguments
[ WARNING ]  -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
         API version............. 2.1.37988
[ INFO ] Device info
         HDDL
         HDDLPlugin.............. version 2.1
         Build................... custom_releases/2020/1_26d55a5f6c20c6b5578c63933b98c2faee462a48

[Step 3/11] Reading the Intermediate Representation network
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1, precision: MIXED
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'image_info' precision FP32, dimensions (NC): 1 3
[ INFO ] Network input 'image_tensor' precision U8, dimensions (NCHW): 1 3 800 800
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 1 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 2 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 3 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 4 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 5 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 6 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 7 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[Step 10/11] Measuring performance (Start inference asyncronously, 8 inference requests, limits: 60000 ms duration)

⇒ hangs here

----------------------mask_rcnn_demo-----------------------

root@9a8f10b19908:~/omz_demos_build/intel64/Release# ./mask_rcnn_demo -i airliner.jpg -m frozen_inference_graph.xml -d HDDL
InferenceEngine: 0x7f2baf5a2040
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     airliner.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        HDDL
        HDDLPlugin version ......... 2.1
        Build ........... custom_releases/2020/1_26d55a5f6c20c6b5578c63933b98c2faee462a48
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Network batch size is 1
[ INFO ] Prepare image airliner.jpg
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ INFO ] Setting input data to the blobs
[ INFO ] Start inference
[HDDLPlugin] [23:58:17.6527][2716]ERROR[HddlClient.cpp:764] Error: wait InferTaskSync(reqSeqNo=3 taskId=1) result timeout failed.
[ ERROR ] _client->inferTaskSync(_graph, inferData) failed: HDDL_GENERAL_ERROR


------------------------------------------------------------

 

 


 

 

Max_L_Intel
Moderator

Hi Taka.

We were able to obtain one AAEON AI Core XM 2280 device for testing and ran the above application and model using the HDDL plugin. We are still not able to replicate the issue you see, hence we cannot escalate this further to the developers.

1) Have you tried testing the XM 2280 in some environment other than the Atom x7-E3950 based one?

2) What Ubuntu distribution and kernel version are you using?

For HDDL, only Ubuntu 18.04 with kernel versions 5.2 and below is supported.
We tested this on the 4.15.0-88 kernel.
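
(A quick way to check the distribution and running kernel:)

lsb_release -d    # prints the Ubuntu release description
uname -r          # prints the running kernel version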

We also found a similar issue reported with the HDDL plugin, but the user there fixed it - https://software.intel.com/en-us/node/814070

je@HC:~/intel/openvino_2020.1.023/deployment_tools/tools/benchmark_tool$ python3 benchmark_app.py -m ~/Mask_rcnn_inception_v2_coco_IR/frozen_inference_graph.xml -d HDDL
[Step 1/11] Parsing and validating input arguments
[ WARNING ]  -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README. 
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
         API version............. 2.1.37988
[ INFO ] Device info
         HDDL
         HDDLPlugin.............. version 2.1
         Build................... custom_releases/2020/1_26d55a5f6c20c6b5578c63933b98c2faee462a48

[Step 3/11] Reading the Intermediate Representation network
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1, precision: MIXED
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[10:35:53.1365][4258]I[main.cpp:243] ## HDDL_INSTALL_DIR: /home/jesus/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl
[10:35:53.1366][4258]I[main.cpp:245] Config file '/home/jesus/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl/config/hddl_service.config' has been loaded
[10:35:53.1369][4258]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service_alive.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:35:53.1370][4258]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service_ready.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:35:53.1371][4258]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_start_exit.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:35:53.1377][4258]I[AutobootStarter.cpp:156] Info: No running autoboot process. Start autoboot daemon...
[10:35:53.1457][4260]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_autoboot_alive.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:35:53.1457][4260]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_autoboot_ready.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:35:53.1458][4260]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_autoboot_start_exit.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:35:53.1458][4260]I[FileHelper.cpp:272] Set file:/tmp/hddl_autoboot_device.map owner: user-'no_change', group-'users', mode-'0660'
[10:35:53.1460][4260]I[AutoBoot.cpp:308] [Firmware Config] deviceName=default deviceNum=0 firmwarePath=/home/jesus/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl/lib/mvnc/usb-ma2x8x.mvcmd
[10:35:54.4443][4276]I[AutoBoot.cpp:197] Start boot device 5.1-ma2480
[10:35:54.6851][4276]I[AutoBoot.cpp:199] Device 5.1-ma2480 boot success, firmware=/home/jesus/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl/lib/mvnc/usb-ma2x8x.mvcmd
[10:35:54.6852][4276]I[AutoBoot.cpp:197] Start boot device 5.3-ma2480
[10:35:54.9224][4276]I[AutoBoot.cpp:199] Device 5.3-ma2480 boot success, firmware=/home/jesus/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl/lib/mvnc/usb-ma2x8x.mvcmd
[10:36:14.9243][4258]I[AutobootStarter.cpp:85] Info: Autoboot is running.
[10:36:14.9335][4258]W[ConfigParser.cpp:269] Warning: Cannot find key, path=scheduler_config.max_graph_per_device subclass=0, use default value: 1.
[10:36:14.9336][4258]W[ConfigParser.cpp:292] Warning: Cannot find key, path=scheduler_config.use_sgad_by_default subclass=0, use default value: false.
[10:36:14.9336][4258]I[DeviceSchedulerFactory.cpp:56] Info: ## DeviceSchedulerFacotry ## Created Squeeze Device-Scheduler2.
[10:36:14.9339][4258]I[DeviceManager.cpp:551] ## SqueezeScheduler created ##
[10:36:14.9339][4258]I[DeviceManager.cpp:649] times 0: try to create worker on device(6.3)
[10:36:16.9369][4258]I[DeviceManager.cpp:670] [SUCCESS] times 0: create worker on device(6.3)
[10:36:16.9369][4258]I[DeviceManager.cpp:719] worker(Wt6.3) created on device(6.3), type(0)
[10:36:16.9370][4258]I[DeviceManager.cpp:649] times 0: try to create worker on device(6.1)
[10:36:18.9402][4258]I[DeviceManager.cpp:670] [SUCCESS] times 0: create worker on device(6.1)
[10:36:18.9403][4258]I[DeviceManager.cpp:719] worker(Wt6.1) created on device(6.1), type(0)
[10:36:18.9403][4258]I[DeviceManager.cpp:145] DEVICE FOUND : 2
[10:36:18.9403][4258]I[DeviceManager.cpp:146] DEVICE OPENED : 2
[10:36:18.9404][4258]I[DeviceManagerCreator.cpp:81] New device manager(DeviceManager0) created with subclass(0), deviceCount(2)
[10:36:18.9466][4258]I[TaskSchedulerFactory.cpp:45] Info: ## TaskSchedulerFactory ## Created Polling Task-Scheduler.
[10:36:18.9471][4258]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_snapshot.sock owner: user-'no_change', group-'users', mode-'0660'
[10:36:18.9476][4258]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service.sock owner: user-'no_change', group-'users', mode-'0660'
[10:36:18.9477][4258]I[MessageDispatcher.cpp:87] Message Dispatcher initialization finished
[10:36:18.9477][4258]I[main.cpp:103] SERVICE IS READY ...
[10:36:19.0501][4376]I[ClientManager.cpp:159] client(id:1) registered: clientName=HDDLPlugin socket=2
[10:37:58.1954][4377]I[GraphManager.cpp:491] Load graph success, graphId=1 graphName=Function_4
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'image_info' precision FP32, dimensions (NC): 1 3
[ INFO ] Network input 'image_tensor' precision U8, dimensions (NCHW): 1 3 800 800
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 1 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 2 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 3 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 4 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 5 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 6 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 7 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[Step 10/11] Measuring performance (Start inference asyncronously, 8 inference requests, limits: 60000 ms duration)
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[Step 11/11] Dumping statistics report
Count:      88 iterations
Duration:   72898.67 ms
Latency:    6360.73 ms
Throughput: 1.21 FPS
[10:39:11.2954][4376]I[ClientManager.cpp:189] client(id:1) unregistered: clientName=HDDLPlugin socket=2
je@HC:~/intel/openvino_2020.1.023/deployment_tools/tools/benchmark_tool$ [10:39:11.4656][4377]I[GraphManager.cpp:539] graph(1) destroyed

taka
Beginner

Hi Max

 

>1) Have you tried to test XM 2280 in some other environment rather than Atom x7-E3950 based?

⇒ Yes. There is no problem in the Core i7-8700 & AI Core XP4 environment.

  The AI Core XP4 has two slots for AI Core XM 2280 modules.

  https://www.aaeon.com/jp/p/ai-edge-computing-board-myriad-x-ai-core-xp4-xp8

 

>2) What Ubuntu distribution and kernel version are you using?

⇒ Ubuntu 18.04.3 LTS  4.14.67-intel-pk-standard

 

Let me ask again:

Could you test with an Atom x7-E3950?

 

Best Regards,

 

Max_L_Intel
Moderator

Hi Taka.

Yes, we also tested it on an Atom x7-E3950 based device using the CPU plugin, and that case works.

Unfortunately, we have no way to test the HDDL XM 2280 in an Atom x7-E3950 environment, since our Atom device has no 2280 M.2 port. But we see that both of these devices work correctly separately.

Looking at the kernel version you use, please specify whether that is a Yocto Project build. If so, we think this might be a possible root cause, since we've also tested the XM 2280 on the 4.14.67-generic kernel from https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.14.67/, and that works fine too.
Do you have a chance to try your HDDL setup on the generic kernel? After updating the kernel, please do not forget to re-run the prerequisites from here - https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux_ivad_vpu.html (a rough sketch of the steps is below).
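
(A rough sketch of installing the mainline 4.14.67 kernel; the exact .deb file names differ per build, so download the amd64 packages listed on the page above first.)

# Install the downloaded mainline kernel packages and reboot into the new kernel
sudo dpkg -i linux-*4.14.67*.deb
sudo reboot

# After rebooting, re-run the HDDL prerequisite steps from the guide above
# (install_IVAD_VPU_dependencies.sh and the drivers setup.sh install).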

In the daemon.log you attached, we also see the initial error "Error: shm_open() failed: errno=2 (No such file or directory)", so permissions might be another possible root cause.

taka
Beginner

Hi, MAX

I tested it with the 4.14.67 kernel, but there is no change.

 

> But we see that both of these devices work correctly separately.

That has been confirmed here as well. The problem occurs with the combination of the Atom E3950 and HDDL.

                              MASK-RCNN    Other than MASK-RCNN
Atom E3950 (CPU plugin)           〇                 〇
Atom E3950 (HDDL plugin)          ×                 〇
Core i7-8700 (CPU plugin)         〇                 〇
Core i7-8700 (HDDL plugin)        〇                 〇

I will add one point:
the Atom x7-E3950 CPU clock is limited to the base clock (1.6 GHz).

So, if you have a chance to try it, please turn off Turbo Boost in the BIOS.

For your reference, the lscpu command output is below.

-----------------------------------------

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               92
Model name:          Intel(R) Atom(TM) Processor E3950 @ 1.60GHz
Stepping:            10
CPU MHz:             1446.587
CPU max MHz:         1600.0000
CPU min MHz:         800.0000
BogoMIPS:            3187.20
Virtualization:      VT-x
L1d cache:           24K
L1i cache:           32K
L2 cache:            1024K
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault cat_l2 ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust smep erms mpx rdt_a rdseed smap clflushopt intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts arch_capabilities

----------------------------------------

Best Regards,

Max_L_Intel
Moderator

Hi Taka.

Yes, E3950 base frequency is 1.60 GHz as per https://ark.intel.com/content/www/us/en/ark/products/96488/intel-atom-x7-e3950-processor-2m-cache-up-to-2-00-ghz.html

We tried the CPU plugin on the Atom x7-E3950 with Turbo Mode disabled, and Mask_rcnn works on it too. Here's the result:

Count: 16 iterations 
Duration: 139963.04 ms
Latency: 27749.60 ms
Throughput: 0.11 FPS

taka
Beginner

Hi, MAX

 

Yes. Mask-rcnn has worked with the Atom x7-E3950 CPU plugin in my environment from the beginning.

Throughput: 0.11 FPS

 

The problem occurs with the combination of Atom E3950 and HDDL.

Please report the result of running with the HDDL plugin on the Atom x7-E3950.

 

The following is the confirmation status in my environment.

--------------------------------------------------------------------------

                              MASK-RCNN    Other than MASK-RCNN
Atom E3950 (CPU plugin)           〇                 〇
Atom E3950 (HDDL plugin)          ×                 〇
Atom E3950 (MYRIAD plugin)        〇                 〇
Atom E3950 (GPU plugin)           〇                 〇
Core i7-8700 (CPU plugin)         〇                 〇
Core i7-8700 (HDDL plugin)        〇                 〇
Core i7-8700 (MYRIAD plugin)      〇                 〇
Core i7-8700 (GPU plugin)         〇                 〇

* 〇 means confirmed working, so those combinations do not need to be checked.

--------------------------------------------------------------------------

Best Regards,

Max_L_Intel
Moderator

Hi Taka.

We've tested the HDDL plugin in the Atom E3950 environment, but unfortunately we are still not able to replicate the issue you observed.

We recommend that you try a fresh installation of the Ubuntu 18.04.3 LTS release from here: http://old-releases.ubuntu.com/releases/bionic/
Then try again with the OpenVINO toolkit 2020.1 build installation and configuration, and the converted model I sent you on March 2.

user@user-UP-APL01:/opt/intel/openvino_2020.1.023/deployment_tools/tools/benchmark_tool$ python3 benchmark_app.py -m ~/Downloads/frozen_inference_graph.xml -d HDDL
[Step 1/11] Parsing and validating input arguments
[ WARNING ]  -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README. 
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
         API version............. 2.1.37988
[ INFO ] Device info
         HDDL
         HDDLPlugin.............. version 2.1
         Build................... custom_releases/2020/1_26d55a5f6c20c6b5578c63933b98c2faee462a48

[Step 3/11] Reading the Intermediate Representation network
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1, precision: MIXED
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[18:24:51.1550][5007]I[main.cpp:243] ## HDDL_INSTALL_DIR: /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl
[18:24:51.1552][5007]I[main.cpp:245] Config file '/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl/config/hddl_service.config' has been loaded
[18:24:51.1562][5007]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service_alive.mutex owner: user-'no_change', group-'users', mode-'0660'
[18:24:51.1563][5007]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service_ready.mutex owner: user-'no_change', group-'users', mode-'0660'
[18:24:51.1564][5007]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_start_exit.mutex owner: user-'no_change', group-'users', mode-'0660'
[18:24:51.1584][5007]I[AutobootStarter.cpp:156] Info: No running autoboot process. Start autoboot daemon...
[18:24:51.1842][5009]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_autoboot_alive.mutex owner: user-'no_change', group-'users', mode-'0660'
[18:24:51.1845][5009]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_autoboot_ready.mutex owner: user-'no_change', group-'users', mode-'0660'
[18:24:51.1846][5009]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_autoboot_start_exit.mutex owner: user-'no_change', group-'users', mode-'0660'
[18:24:51.1847][5009]I[FileHelper.cpp:272] Set file:/tmp/hddl_autoboot_device.map owner: user-'no_change', group-'users', mode-'0660'
[18:24:51.1851][5009]I[AutoBoot.cpp:308] [Firmware Config] deviceName=default deviceNum=0 firmwarePath=/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl/lib/mvnc/usb-ma2x8x.mvcmd
[18:24:52.4846][5018]I[AutoBoot.cpp:197] Start boot device 3.1-ma2480
[18:24:52.8120][5018]I[AutoBoot.cpp:199] Device 3.1-ma2480 boot success, firmware=/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl/lib/mvnc/usb-ma2x8x.mvcmd
[18:24:52.8121][5018]I[AutoBoot.cpp:197] Start boot device 1.4-ma2480
[18:24:52.9917][5018]I[AutoBoot.cpp:199] Device 1.4-ma2480 boot success, firmware=/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl/lib/mvnc/usb-ma2x8x.mvcmd
[18:25:12.9953][5007]I[AutobootStarter.cpp:85] Info: Autoboot is running.
[18:25:13.1521][5007]W[ConfigParser.cpp:269] Warning: Cannot find key, path=scheduler_config.max_graph_per_device subclass=0, use default value: 1.
[18:25:13.1522][5007]W[ConfigParser.cpp:292] Warning: Cannot find key, path=scheduler_config.use_sgad_by_default subclass=0, use default value: false.
[18:25:13.1523][5007]I[DeviceSchedulerFactory.cpp:56] Info: ## DeviceSchedulerFacotry ## Created Squeeze Device-Scheduler2.
[18:25:13.1533][5007]I[DeviceManager.cpp:551] ## SqueezeScheduler created ##
[18:25:13.1533][5007]I[DeviceManager.cpp:649] times 0: try to create worker on device(2.4)
[18:25:15.1580][5007]I[DeviceManager.cpp:670] [SUCCESS] times 0: create worker on device(2.4)
[18:25:15.1583][5007]I[DeviceManager.cpp:719] worker(Wt2.4) created on device(2.4), type(0)
[18:25:15.1584][5007]I[DeviceManager.cpp:649] times 0: try to create worker on device(4.1)
[18:25:17.1618][5007]I[DeviceManager.cpp:670] [SUCCESS] times 0: create worker on device(4.1)
[18:25:17.1620][5007]I[DeviceManager.cpp:719] worker(Wt4.1) created on device(4.1), type(0)
[18:25:17.1620][5007]I[DeviceManager.cpp:145] DEVICE FOUND : 2
[18:25:17.1621][5007]I[DeviceManager.cpp:146] DEVICE OPENED : 2
[18:25:17.1623][5007]I[DeviceManagerCreator.cpp:81] New device manager(DeviceManager0) created with subclass(0), deviceCount(2)
[18:25:17.3122][5007]I[TaskSchedulerFactory.cpp:45] Info: ## TaskSchedulerFactory ## Created Polling Task-Scheduler.
[18:25:17.3128][5007]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_snapshot.sock owner: user-'no_change', group-'users', mode-'0660'
[18:25:17.3133][5007]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service.sock owner: user-'no_change', group-'users', mode-'0660'
[18:25:17.3135][5007]I[MessageDispatcher.cpp:87] Message Dispatcher initialization finished
[18:25:17.3135][5007]I[main.cpp:103] SERVICE IS READY ...
[18:25:17.3336][5052]I[ClientManager.cpp:159] client(id:1) registered: clientName=HDDLPlugin socket=2
[18:31:47.7504][5053]I[GraphManager.cpp:491] Load graph success, graphId=1 graphName=Function_4
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'image_info' precision FP32, dimensions (NC): 1 3
[ INFO ] Network input 'image_tensor' precision U8, dimensions (NCHW): 1 3 800 800
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 1 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 2 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 3 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 4 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 5 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 6 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 7 filling
[ INFO ] Fill input 'image_info' with image size 800x800
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[Step 10/11] Measuring performance (Start inference asyncronously, 8 inference requests, limits: 60000 ms duration)
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(3)
ion alloc failed size=16205696
[Step 11/11] Dumping statistics report
Count:      88 iterations
Duration:   72493.01 ms
Latency:    6184.78 ms
Throughput: 1.21 FPS

[18:33:00.7230][5052]I[ClientManager.cpp:189] client(id:1) unregistered: clientName=HDDLPlugin socket=2
[18:33:00.8964][5053]I[GraphManager.cpp:539] graph(1) destroyed
user@user-UP-APL01:/opt/intel/openvino_2020.1.023/deployment_tools/tools/benchmark_tool$ lscpu | grep Atom
Model name:          Intel(R) Atom(TM) Processor E3950 @ 1.60GHz
user@user-UP-APL01:/opt/intel/openvino_2020.1.023/deployment_tools/tools/benchmark_tool$ 
 

taka
Beginner

Hi, MAX


Thank you for testing the HDDL plugin on the Atom E3950.

I will consider reinstalling Ubuntu 18.04.3 LTS. (I'm sorry, but the system is under development and cannot be changed right now.)

Let me check three points.

(1) Are you testing the HDDL plugin with Turbo Boost disabled in the BIOS?

If not, please disable it and test.
(We ask because Turbo Boost cannot be enabled due to our product specifications.)

 

(2) Please tell me the result of the following command:
 "dd if=/dev/zero of=/dev/null bs=1024K count=100000"

My environment is DDR3L 4GB single channel.
-----------------my result---------------------
# dd if=/dev/zero of=/dev/null bs=1024K count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 20.9918 s, 5.0 GB/s
-------------------------------------------------------

 


(3) Is your hddl_service.config the same as the attached file?
Can you attach the hddl_service.config that you use?
As I reported before, when I executed mask_rcnn_demo, the following message was output:
“Error: wait InferTaskSync (reqSeqNo = 3 taskId = 1) result timeout failed.”

I think the HDDL daemon may have timed out due to the low machine specifications.

----------------------"mask_rcnn_demo" result-----------------------

root@9a8f10b19908:~/omz_demos_build/intel64/Release# ./mask_rcnn_demo -i airliner.jpg -m frozen_inference_graph.xml -d HDDL
InferenceEngine: 0x7f2baf5a2040
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     airliner.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        HDDL
        HDDLPlugin version ......... 2.1
        Build ........... custom_releases/2020/1_26d55a5f6c20c6b5578c63933b98c2faee462a48
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Network batch size is 1
[ INFO ] Prepare image airliner.jpg
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ion_ioctl][77]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=16205680
[ INFO ] Setting input data to the blobs
[ INFO ] Start inference
[HDDLPlugin] [23:58:17.6527][2716]ERROR[HddlClient.cpp:764] Error: wait InferTaskSync(reqSeqNo=3 taskId=1) result timeout failed.
[ ERROR ] _client->inferTaskSync(_graph, inferData) failed: HDDL_GENERAL_ERROR

-------------------------/var/log/daemon.log----------------------

  hddldaemon[275]: +-------------+-------------------+-------------------+
 hddldaemon[275]: [20:48:06.3893][972]I[DeviceManager.cpp:916] DeviceSnapshot(subclass=0):
 hddldaemon[275]: | deviceId    | 0(00)             | 1(0x1)            |
 hddldaemon[275]: | device      | 4.1               | 4.3               |
 hddldaemon[275]: | util%       | 0.0 %             | 0.0 %             |
 hddldaemon[275]: | thermal     | 55.29(0)          | 52.46(0)          |
 hddldaemon[275]: | scheduler   | squeeze           | squeeze           |
 hddldaemon[275]: | comment     |                   |                   |
 hddldaemon[275]: | resetTimes  | 0                 | 0                 |
 hddldaemon[275]: | cacheNum    | 0                 | 0                 |
 hddldaemon[275]: | cacheGraph0 |                   |                   |
 hddldaemon[275]: | cacheGraph1 |                   |                   |
 hddldaemon[275]: | cacheGraph2 |                   |                   |
 hddldaemon[275]: | cacheGraph3 |                   |                   |
 hddldaemon[275]: +-------------+-------------------+-------------------+
 hddldaemon[275]: | status      | LOAD_GRAPH        | LOAD_GRAPH        |
 hddldaemon[275]: | fps         |                   |                   |
 hddldaemon[275]: | curGraph    |                   |                   |
 hddldaemon[275]: | rPriority   |                   |                   |
 hddldaemon[275]: | loadTime    |                   |                   |
 hddldaemon[275]: | runTime     |                   |                   |
 hddldaemon[275]: | inference   |                   |                   |
 hddldaemon[275]: | prevGraph   |                   |                   |
 hddldaemon[275]: | loadTime    |                   |                   |
 hddldaemon[275]: | unloadTime  |                   |                   |
 hddldaemon[275]: | runTime     |                   |                   |
 hddldaemon[275]: | inference   |                   |                   |
 hddldaemon[275]: +-------------+-------------------+-------------------+
 hddldaemon[275]: [20:48:06.5275][1000]I[GraphManager.cpp:491] Load graph success
 hddldaemon[275]: #033[1;31;40m[20:48:06.6480][3441]ERROR[ShareMemory_linux.cpp:57] Error: shm_open() failed: errno=2 (No such file or directory)
 hddldaemon[275]: #033[0m
 hddldaemon[275]: #033[1;31;40m[20:48:06.6481][3441]ERROR[HddlMyriadXDevice.cpp:1379] Error: share memory buffer('hddl_57_140271028352832_1') mapping failed#033[0m
 hddldaemon[275]: #033[1;31;40m[20:48:06.6482][3441]ERROR[HddlMyriadXDevice.cpp:766] Error: map outputBuffer failed
 hddldaemon[275]: [20:48:08.7010][3441]I[HddlMyriadXDevice.cpp:770] Getting result in the case of mapBuffer failed: taskIdOrigin=2 taskIdAttach=2
 hddldaemon[275]: #033[1;31;40m[20:48:08.7011][3441]ERROR[SlaveWorker.cpp:208] [Wt4.1S1] Error: getResult(2) failed
 hddldaemon[275]: [20:48:08.7012][3441]I[SlaveWorker.cpp:282] [Wt4.1S1] Return task(2) to TaskManager
 hddldaemon[275]: #033[1;31;40m[20:48:08.7847][3443]ERROR[ShareMemory_linux.cpp:57] Error: shm_open() failed: errno=2 (No such file or directory)
 hddldaemon[275]: #033[0m
 hddldaemon[275]: #033[1;31;40m[20:48:08.7849][3443]ERROR[HddlMyriadXDevice.cpp:1379] Error: share memory buffer('hddl_57_140271028352832_1') mapping failed#033[0m
 hddldaemon[275]: #033[1;31;40m[20:48:08.7849][3443]ERROR[HddlMyriadXDevice.cpp:766] Error: map outputBuffer failed
 hddldaemon[275]: [20:48:10.8397][3443]I[HddlMyriadXDevice.cpp:770] Getting result in the case of mapBuffer failed: taskIdOrigin=2 taskIdAttach=2
 hddldaemon[275]: #033[1;31;40m[20:48:10.8398][3443]ERROR[SlaveWorker.cpp:208] [Wt4.3S1] Error: getResult(2) failed
 hddldaemon[275]: [20:48:10.8398][3443]I[SlaveWorker.cpp:282] [Wt4.3S1] Return task(2) to TaskManager

---------------------------------------------------------------------

Best Regards,

Max_L_Intel
Moderator

(1) Are you testing the HDDL plugin with Turbo Boost disabled in the BIOS?

If not, please disable it and test.
(We ask because Turbo Boost cannot be enabled due to our product specifications.)

With Turbo Mode disabled and using the HDDL plugin, we got this:

Count: 44 iterations 
Duration: 70846.00 ms
Latency: 6166.55 ms
Throughput: 0.62 FPS
 

(2) Please tell me the result of the following command:
 "dd if=/dev/zero of=/dev/null bs=1024K count=100000"

My environment is DDR3L 4GB single channel.
-----------------my result---------------------
# dd if=/dev/zero of=/dev/null bs=1024K count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 20.9918 s, 5.0 GB/s
-------------------------------------------------------

We are using an AI Edge UP2 board with 8 GB DDR4. The result of this command is:

100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 22.2554 s, 4.7 GB/s
 

(3) Is your hddl_service.config the same as the attached file?
Can you attach the hddl_service.config that you use?
As I reported before, when I executed mask_rcnn_demo, the following message was output:
“Error: wait InferTaskSync (reqSeqNo = 3 taskId = 1) result timeout failed.”

I think the HDDL daemon may have timed out due to the low machine specifications.

Yes, it looks the same. I'm sending this over PM just in case.

taka
Beginner

Hi, Max

Thank you for the test.

 

I'm very sorry to trouble you again.
Can you create a container with the attached Dockerfile and run benchmark_app.py or mask_rcnn_demo in the container?

(Atom x7-E3950 & HDDL environment)

------------------build command-------------------------

docker build -t ubuntu1804:openvino_2020R1 .

docker run -it --device /dev/dri --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --name openvino -d ubuntu1804:openvino_2020R1 /bin/bash

docker exec -it openvino /bin/bash

--------------------------------------------------------------
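
(For reference, assuming the hddldaemon service is running on the host, as the /var/tmp and /dev/ion mappings above suggest, the benchmark can then be run inside the container with something like the following; the model path is just a placeholder.)

docker exec -it openvino /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && python3 /opt/intel/openvino/deployment_tools/tools/benchmark_tool/benchmark_app.py -m /root/frozen_inference_graph.xml -d HDDL"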

 

If this doesn't reproduce the problem, I will wait for the next version of OpenVINO.

 

Best Regards,

 

 

Max_L_Intel
Moderator

Hello Taka.

Unfortunately, we don't currently have a chance to test any applications within the Atom x7-E3950 & HDDL environment.
Please try it on your end with the recently released OpenVINO toolkit 2020.2 build from https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/choose-download/linux.html

Thanks.

taka
Beginner

Hi, Max

 

Unfortunately, OpenVINO 2020.2 did not fix it either.

The error log is posted below.

----------------omz_demos_build/intel64/Release/mask_rcnn_demo-----------------
[setupvars.sh] OpenVINO environment initialized
InferenceEngine: 0x7f138b86f030
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /root/image/test.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        HDDL
        HDDLPlugin version ......... 2.1
        Build ........... custom_releases/2020/2_d8830cd2bfe6199444299be1d92b281e5459f195
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Network batch size is 1
[ INFO ] Prepare image /root/image/test.jpg
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ion_ioctl][82]ioctl c0484900 failed with code -1: Operation not permitted, fd(15)
ion alloc failed size=8102832 
[ INFO ] Setting input data to the blobs
[ INFO ] Start inference
[HDDLPlugin] [04:24:08.7721][352]ERROR[HddlClient.cpp:764] Error: wait InferTaskSync(reqSeqNo=3 taskId=1) result timeout failed.
[ ERROR ] _client->inferTaskSync(_graph, inferData) failed: HDDL_GENERAL_ERROR


 ---------------------------------HDDL Log ---------------------------------------
[13:23:08.8331][12351]ERROR[ShareMemory_linux.cpp:57] Error: shm_open() failed: errno=2 (No such file or directory)

[13:23:08.8333][12351]ERROR[HddlMyriadXDevice.cpp:1370] Error: share memory buffer('hddl_352_139721931513920_1') mapping failed
[13:23:08.8334][12351]ERROR[HddlMyriadXDevice.cpp:764] Error: map outputBuffer failed, device=4.1 taskId=1
[ion_ioctl][82]ioctl c0484900 failed with code -1: Operation not permitted, fd(9)
ion alloc failed size=8102848
[13:23:10.7929][12351]I[HddlMyriadXDevice.cpp:768] Getting result in the case of mapBuffer failed: taskIdOrigin=1 taskIdAttach=1
[13:23:10.7946][12351]ERROR[SlaveWorker.cpp:208] [Wt4.1S0] Error: getResult(1) failed, rc=-119
[13:23:10.7947][12351]I[SlaveWorker.cpp:282] [Wt4.1S0] Return task(1) to TaskManager, loadCount=1 error=-119
[13:23:10.8574][12353]ERROR[ShareMemory_linux.cpp:57] Error: shm_open() failed: errno=2 (No such file or directory)

[13:23:10.8576][12353]ERROR[HddlMyriadXDevice.cpp:1370] Error: share memory buffer('hddl_352_139721931513920_1') mapping failed
[13:23:10.8576][12353]ERROR[HddlMyriadXDevice.cpp:764] Error: map outputBuffer failed, device=4.3 taskId=1 
 
