
Error running inference on a fast.ai RetinaNet model with MYRIAD

Truong__Dien_Hoa
New Contributor II

I'm trying to run a RetinaNet object detection model on MYRIAD. You can find the example model here: https://github.com/ChristianMarzahl/ObjectDetection

I first converted the model to .onnx, and then the Model Optimizer ran without any problem with FP16.
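For reference, the conversion was roughly along these lines (a sketch; mo_onnx.py lives under deployment_tools/model_optimizer, and retina.onnx is the file name that comes up later in this thread):

python3 mo_onnx.py --input_model retina.onnx --data_type FP16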

However, I have problems running inference on both MYRIAD and GPU.

With GPU, the problem is:

RuntimeError: Unsupported layout (Unknown) in output: 255/Output_0/Data__const

So I went back to the model's .xml file and removed the last two Data_const layers, and then it works. Is the Const layer not supported in OpenVINO? It is just a constant, so I hope I can make some tweaks to fix the output.

With MYRIAD, I don't know how to deal with the problem. I get:

RuntimeError: AssertionFailed: !ieDims.empty()
I: [ncAPI] [    923019] ncDeviceClose:1496    Removing device...
I: [ncAPI] [    923779] destroyDeviceHandle:1439    Destroying device handler

After I cut off the last two layers, as with the GPU, the error changed to:

RuntimeError: AssertionFailed: minConcatDimInd < dimsOrder.numDims()
I: [ncAPI] [    189179] ncDeviceClose:1496    Removing device...
I: [ncAPI] [    189939] destroyDeviceHandle:1439    Destroying device handler

 

I have attached the ONNX model and the model after conversion with the Model Optimizer. I used the latest version of OpenVINO, which is 2019.1.144.

 

Can someone help me with this? Thank you.

Shubha_R_Intel
Employee

Dear Truong, Dien Hoa

You are not the first person to report issues with RetinaNet. I also need a short script or program that runs inference. Do you have that too? Can you attach it as well?

Thanks for using OpenVINO!

Shubha

Truong__Dien_Hoa
New Contributor II

Dear Shubha,

 

Thanks for your reply. I am just using the Python benchmark sample provided with OpenVINO.
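The invocation is roughly the following (a sketch; retina.xml stands in for the generated IR, and the path matches my install as seen in the traceback later in this thread):

cd /opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/samples/python_samples/benchmark_app
python3 benchmark_app.py -m retina.xml -d MYRIAD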

 

I hope you can find the issue.

 

Best regards,

Hoa

Truong__Dien_Hoa
New Contributor II

Hi,

Sorry to disturb you, but do you have any updates on this thread?

I have tried removing the last two Data_const layers and running inference on the GPU. Unfortunately, it does not work: the output shape is different and the outputs are all zeros. I don't understand how the Data_const layers can affect the result.

Thank you

Best regards,

Hoa

Shubha_R_Intel
Employee

Dear Truong, Dien Hoa,

You are definitely not disturbing me. Unfortunately, I have not had a chance to look into it yet, but I promise I will, maybe tomorrow. Thanks for your patience,

Shubha

 

Shubha_R_Intel
Employee

Dear Truong, Dien Hoa,

I finally had a chance to reproduce your issues on GPU, CPU, and MYRIAD. Sorry it took so long. The problems you are experiencing are definitely a bug, and in fact I have just filed one. RetinaNet is definitely supported; in fact, there is a retinanet.json under deployment_tools\model_optimizer\extensions\front\tf.

I will post updates here.

Thanks for your patience, and sorry for the trouble!

Shubha

 

Truong__Dien_Hoa
New Contributor II

Thanks Shubha,

I'm happy that it is supported. I think that's why the Model Optimizer ran without any problem.

I hope you can find where the problem in inference is.

Best regards

Shubha_R_Intel
Employee

Dear Truong, Dien Hoa,

The issue is that OpenVINO plugins don't support 0-d tensors (scalars). The way you handled it for GPU (manually editing the IR file) is the correct way to handle it, although, frankly, the Model Optimizer has a model cutting technique that should be used as --output <offending layer>. But in this case I tried it, and --output does not work. So the only way to deal with it is to manually remove the layer from the XML file, just as you figured out. However, I'm still investigating why MYRIAD inference still fails after fixing the XML.
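If it helps, that manual edit can also be scripted. A minimal sketch with xml.etree.ElementTree (the second layer name below is a placeholder, since only 255/Output_0/Data__const is known from this thread, and the IR file names are assumptions):

import xml.etree.ElementTree as ET

# Names of the trailing Const layers to drop; the first comes from the GPU error,
# the second is a placeholder for the other trailing Data__const layer.
to_remove = {"255/Output_0/Data__const", "OTHER_TRAILING_Data__const"}

tree = ET.parse("retina.xml")            # IR produced by the Model Optimizer
net = tree.getroot()
layers = net.find("layers")
edges = net.find("edges")

# Remove the offending <layer> entries and remember their ids
removed_ids = set()
for layer in list(layers):
    if layer.get("name") in to_remove:
        removed_ids.add(layer.get("id"))
        layers.remove(layer)

# Drop every <edge> that points to or from a removed layer
for edge in list(edges):
    if edge.get("from-layer") in removed_ids or edge.get("to-layer") in removed_ids:
        edges.remove(edge)

tree.write("retina_trimmed.xml")         # use this .xml together with the original .bin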

Thanks for your patience !

Shubha

Shubha_R_Intel
Employee

Dear Truong, Dien Hoa,

python "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.148\deployment_tools\model_optimizer\mo_onnx.py" --input_model retina.onnx --output "249" --log_level DEBUG works (model cutting) because 249 is a real output layer in your original ONNX model. In order for model cutting to work, the layer name has to be in the original model rather than a layer name created by Model Optimizer. For instance 255/Output_0/Data__const was not in your original model (it was generated by Model Optimizer) so the model cutting technique will not work on that layer.

Hope it helps,

Thanks,

Shubha

Truong__Dien_Hoa
New Contributor II

Hi Shubha,

Thanks. I checked your solution and it works on CPU.

However, for MYRIAD I still have a problem.

[Info   ][VPU][MyriadPlugin] Device #0 MYRIAD-X allocated
[ ERROR ] AssertionFailed: minConcatDimInd < dimsOrder.numDims()
Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/samples/python_samples/benchmark_app/benchmark/benchmark.py", line 99, in main
    exe_network = plugin.load(ie_network, args.number_infer_requests)
  File "ie_api.pyx", line 395, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 406, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: AssertionFailed: minConcatDimInd < dimsOrder.numDims()
I: [ncAPI] [    145252] ncDeviceClose:1453    Removing device...
I: [ncAPI] [    145714] destroyDeviceHandle:1411    Destroying device handler

I can't check the GPU right now, but can you please verify it? Did you test with MYRIAD?

Best regards,

Truong__Dien_Hoa
New Contributor II

I can see we might be using different versions of OpenVINO. You use openvino_2019.2.148 and I use openvino_2019.1.094.

I updated to the latest version on the site, which is openvino_2019.1.144, but the problem is still there.

 

[Update]:

I checked the output on CPU and it looks weird. The shape is 288, which is different from the expected output shape of this RetinaNet, which is 24480. Actually, this model is built with fast.ai, which is constructed on top of PyTorch, and to be able to use OpenVINO I had to convert it to ONNX. Do you know how I can explore the ONNX model to verify that it was converted correctly? Thank you.
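One way I could try to inspect it, assuming the onnx and onnxruntime Python packages are installed, is a quick check like this (a sketch, with retina.onnx as the exported file):

import numpy as np
import onnx
import onnxruntime as ort

# Check that the exported graph is structurally valid
model = onnx.load("retina.onnx")
onnx.checker.check_model(model)

# Print the output names and shapes declared in the ONNX graph
for out in model.graph.output:
    dims = [d.dim_value for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)

# Run the ONNX model directly and compare the result shapes (and values)
# against the fast.ai / PyTorch model and the OpenVINO CPU output
sess = ort.InferenceSession("retina.onnx")
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)
for out in sess.run(None, {inp.name: dummy}):
    print(out.shape)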

Shubha_R_Intel
Employee

Dear Truong, Dien Hoa,

Yes, this issue has been acknowledged: benchmark_app fails with this (corrected) model on MYRIAD. I have filed a bug on the MYRIAD issue.

Thanks,

Shubha

 
