Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Variable input size in 2018 R3

wang__jiajun
Beginner
838 Views

In the release note of 2018 R3, it mentions variable model input size:

Feature preview for Shape Inference. This feature allows you to change the model input size after reading the IR in the Inference Engine (without need to go back to the Model Optimizer).

How can I change the input size dynamically in the python API during inference?

0 Kudos
7 Replies
Mikhail_T_Intel
Employee

Hi Jiajun

To change the input size you can reshape the network, but this must be done before the network is loaded to the plugin. See below a code snippet that demonstrates how to reshape the network in Python:

net = IENetwork.from_ir(model=path_to_the_xml, weights=path_to_the_bin)
input_layer = next(iter(net.inputs))  # name of the first input
n, c, h, w = net.inputs[input_layer]  # current input shape (NCHW)
net.reshape({input_layer: (n, c, h * 2, w * 2)})  # e.g. double the spatial size
wang__jiajun
Beginner

I have loaded a fully convolutional squeezenet, but get the following error when reshaping size:

----> 1 net.reshape({input_layer: (n, c, h*2, w*2)})

ie_api.pyx in inference_engine.ie_api.IENetwork.reshape()

RuntimeError: Dims and format are inconsistent.
/home/user/teamcity/work/scoring_engine_build/releases_openvino-2018-r3/src/inference_engine/ie_layouts.cpp:245
/home/user/teamcity/work/scoring_engine_build/releases_openvino-2018-r3/include/details/ie_exception_conversion.hpp:80

 

Nikolay_L_Intel1
Employee

Hi Jiajun!

Is it possible to share your model to make sure we are looking at the same topology?

Regards,
Nikolay 

wang__jiajun
Beginner

After careful inspection, I think the reshaping error comes from the Flatten layer.

The Flatten layer should squeeze the axes between `axis` and `end_axis` into a single dimension, but the implementation in the Inference Engine doesn't work as expected.
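To make the expected semantics concrete, here is a minimal pure-Python sketch of the Flatten shape rule (this is an illustration of the intended behavior, not the Inference Engine code): the dimensions from `axis` through `end_axis` collapse into one, so in a fully convolutional net the flattened size must track the input's spatial size.

```python
def flatten_shape(shape, axis, end_axis=-1):
    """Output shape of Flatten: collapse dims [axis, end_axis] into one."""
    if end_axis < 0:
        end_axis += len(shape)
    collapsed = 1
    for d in shape[axis:end_axis + 1]:
        collapsed *= d
    return shape[:axis] + [collapsed] + shape[end_axis + 1:]

# The flattened dimension changes with the spatial input size:
print(flatten_shape([1, 256, 13, 13], axis=1))  # [1, 43264]
print(flatten_shape([1, 256, 26, 26], axis=1))  # [1, 173056]
```

If the IR instead stores a fixed output dimension for the Flatten layer, reshaping the input leaves the stored dims inconsistent with the computed ones, which would match the "Dims and format are inconsistent" error above (this reading of the error is an assumption).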

BTW, the compressed model file is attached below.

Nikolay_L_Intel1
Employee

Hi Jiajun!

Thanks for sending the model! Was it converted from Caffe or MXNet?
A fix for this issue should appear in the next release, for MXNet at least. For now you can avoid the error by removing `dim=0` from the Flatten layer in the IR.
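If you have many Flatten layers to patch, the edit can be scripted with the standard library. Note the IR fragment below is a hypothetical sketch of the layer layout (a `<layer type="Flatten">` with its parameters as attributes of a `<data>` child); adjust the tag and attribute names to match your actual generated XML:

```python
import xml.etree.ElementTree as ET

def strip_flatten_dim(ir_xml: str) -> str:
    """Remove the 'dim' attribute from the <data> element of Flatten layers."""
    root = ET.fromstring(ir_xml)
    for layer in root.iter("layer"):
        if layer.get("type") == "Flatten":
            data = layer.find("data")
            if data is not None and "dim" in data.attrib:
                del data.attrib["dim"]  # drop the fixed flattened size
    return ET.tostring(root, encoding="unicode")

# Hypothetical fragment in the shape of an IR <layers> section:
sample = """<net><layers>
<layer id="5" name="flat" type="Flatten"><data axis="1" dim="0"/></layer>
</layers></net>"""
print(strip_flatten_dim(sample))
```

After patching, regenerate nothing else; the `.bin` weights file is unaffected since only a layer attribute changes.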

Best regards,
Nikolay

wang__jiajun
Beginner

The model was converted from MXNet.

Thanks for your reply, now the model works.

0 Kudos
Wong__Connor
Beginner

I understand that the current approach is to reshape the network before loading it into the plugin. In my work I need to reshape it several times, which triggers multiple plugin loads, each taking 0.5-1 s. This is rather expensive. Is there a faster way to avoid this? Inference itself is definitely faster on OpenVINO than on TensorFlow CPU, but TensorFlow allows dynamic input sizes and so avoids this reshaping and reloading, which shrinks the net performance gain from OpenVINO in my case. Any advice?
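One way to amortize the reload cost described above is to cache one loaded network per input size, so each distinct size pays the load penalty only once. A sketch of that pattern, with a stub standing in for the actual reshape-and-load step (the `load_for_size` callable here is hypothetical, not an OpenVINO API):

```python
class ExecNetCache:
    """Cache one loaded network per input shape to avoid repeated reloads."""
    def __init__(self, load_for_size):
        self._load = load_for_size  # expensive: reshape + load to plugin
        self._cache = {}

    def get(self, shape):
        shape = tuple(shape)
        if shape not in self._cache:
            self._cache[shape] = self._load(shape)  # pay the 0.5-1 s once
        return self._cache[shape]

# Stub standing in for IENetwork.reshape + plugin load:
loads = []
def fake_load(shape):
    loads.append(shape)
    return f"exec_net{shape}"

cache = ExecNetCache(fake_load)
cache.get((1, 3, 224, 224))
cache.get((1, 3, 224, 224))  # cache hit: no second load
cache.get((1, 3, 448, 448))
print(len(loads))  # 2 distinct sizes loaded
```

This helps only when the set of input sizes repeats; for genuinely arbitrary sizes, padding or letterboxing inputs to a few fixed shapes is a common alternative.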
