chen__xiaoyu
Beginner

IEPlugin load input dim is wrong

Hi,

I converted TSM-ResNet (a video understanding model) from ONNX to IR. The XML file looks fine; its input is:

<net batch="1" name="tsm_res50_model_cpu_optimal" version="4">
    <layers>
        <layer id="0" name="0" precision="FP32" type="Input">
            <output>
                <port id="0">
                    <dim>16</dim>
                    <dim>3</dim>
                    <dim>224</dim>
                    <dim>224</dim>
                </port>
            </output>
        </layer>

Here the "16" is not the batch size but the number of frames the model requires.
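To confirm what the IR actually declares, the input dims can be read straight from the .xml file with the standard library, independently of the Inference Engine. This is a minimal sketch; the layer name "0" matches the snippet above, and the file path would be whatever your converted model uses.

```python
# Sketch: read the declared input dims of an OpenVINO IR .xml file using
# only the standard library, so the result is independent of how the
# Inference Engine later interprets the first dimension.
import xml.etree.ElementTree as ET

def input_dims(ir_xml_path):
    """Return {layer_name: [dims]} for every Input layer in an IR .xml."""
    root = ET.parse(ir_xml_path).getroot()
    dims = {}
    for layer in root.iter("layer"):
        if layer.get("type") == "Input":
            port = layer.find("./output/port")
            dims[layer.get("name")] = [int(d.text) for d in port.findall("dim")]
    return dims
```

For the IR above this would report `{"0": [16, 3, 224, 224]}`, which makes it easy to see whether the 16 is present in the file or was changed by the loader.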

But when I load the model with the Python sample, the input shape becomes 1, 3, 224, 224, and the IE reports an error:

[ INFO ] Batch size is 1
[ INFO ] Loading model to the plugin
Traceback (most recent call last):
  File "/MacUbuntuLocal/intel//computer_vision_sdk_2018.5.445/inference_engine/samples/python_samples/cxy_tsm.py", line 141, in <module>
    sys.exit(main() or 0)
  File "/MacUbuntuLocal/intel//computer_vision_sdk_2018.5.445/inference_engine/samples/python_samples/cxy_tsm.py", line 97, in main
    exec_net = plugin.load(network=net)
  File "ie_api.pyx", line 389, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 400, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: AssertionFailed: output->desc().dimsOrder() == inDesc.dimsOrder()

 

Did the loader change the 16 to 1 because it treats the first dimension as the batch size (which it is not)? Any suggestions?

Thank you!

chen__xiaoyu
Beginner

Any suggestions?

 

I've encountered a similar issue, shown below; it seems to be a shape problem too:

Traceback (most recent call last):
  File "/MacUbuntuLocal/intel//computer_vision_sdk_2018.5.445/inference_engine/samples/python_samples/cxy_tsm.py", line 144, in <module>
    sys.exit(main() or 0)
  File "/MacUbuntuLocal/intel//computer_vision_sdk_2018.5.445/inference_engine/samples/python_samples/cxy_tsm.py", line 100, in main
    exec_net = plugin.load(network=net)
  File "ie_api.pyx", line 389, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 400, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: Incorrect blob sizes for node 384/Output_0/Data__const

 

Shubha_R_Intel
Employee

Hi there. Where are you finding cxy_tsm.py? I don't see it in the OpenVINO installation; I have checked R4 and R5. Also, please provide an exact URL to the ResNet model you are using.

Thanks,

Shubha

chen__xiaoyu
Beginner

Shubha R. (Intel) wrote:

Hi there. Where are you finding cxy_tsm.py ? I don't see it in the OpenVino installation. I have checked R4 and R5. Also please provide an exact URL to the ResNet model you are using.

Thanks,

Shubha

Thank you Shubha, all the related files are listed here:

https://1drv.ms/f/s!Anu8COkYdog4hUKAdwUizLzJugl7

 

cxy_tsm.py is a copy of classification_sample.py, with the input changed for testing.

This model's input has shape (8, 3, 224, 224); the "8" is the number of frames, meaning it needs eight images for one inference.
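Building that input means stacking eight preprocessed frames into a single blob before inference. A minimal numpy sketch of that step, with dummy frame data standing in for the decoded video (the names `stack_frames` and `FRAMES` are illustrative, not from the sample):

```python
# Sketch: assemble the (8, 3, 224, 224) blob the model expects from eight
# per-frame (3, 224, 224) arrays. Real frames would come from the video
# decoder; zeros are used here only to show the shapes.
import numpy as np

FRAMES, C, H, W = 8, 3, 224, 224

def stack_frames(frames):
    """Stack per-frame (C, H, W) arrays into one (FRAMES, C, H, W) blob."""
    assert len(frames) == FRAMES, "this model needs exactly 8 frames"
    return np.stack(frames, axis=0).astype(np.float32)

frames = [np.zeros((C, H, W), dtype=np.float32) for _ in range(FRAMES)]
blob = stack_frames(frames)
```

This is also why treating the leading 8 as a batch dimension breaks the model: the eight slices are one temporal sample, not eight independent inputs.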

 

chen__xiaoyu
Beginner

@Shubha R.

Is there any progress on this problem?

Soni__Neha
Beginner

Hi @Shubha, I tried to load an LSTM but got this error:

Traceback (most recent call last):
  File "lstm_rasp.py", line 433, in <module>
    sys.exit(main() or 0)
  File "lstm_rasp.py", line 210, in main
    exec_net4 = plugin.load(network=net4)
  File "ie_api.pyx", line 551, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 561, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: AssertionFailed: StatusCode::OK == network->getLayerByName(layerName.c_str(), layer, &resp)

Please suggest a way forward. Looking forward to your reply.
Shubha_R_Intel
Employee

Dear Soni, Neha,

Perhaps you are also the author of the dldt GitHub issue below?

https://github.com/opencv/dldt/issues/227

In order to help you I would need a *.zip file containing your model as well as a short inference script which demonstrates the issue.

Thanks,

Shubha