The release notes for 2018 R3 mention variable model input size:

> Feature preview for Shape Inference. This feature allows you to change the model input size after reading the IR in the Inference Engine (without need to go back to the Model Optimizer).

How can I change the input size dynamically in the Python API during inference?
Hi Jiajun
To change the input size you can reshape the network, but this must be done before the network is loaded to the plugin. The code snippet below demonstrates how to reshape a network in Python:

```python
# Read the IR, look up the first input, and double its spatial dimensions
net = IENetwork.from_ir(model=path_to_the_xml, weights=path_to_the_bin)
input_layer = next(iter(net.inputs))
n, c, h, w = net.inputs[input_layer]
net.reshape({input_layer: (n, c, h*2, w*2)})
```
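After reshaping, the network is loaded to the plugin as usual. A minimal sketch of the full flow, assuming the 2018 R3 Python API (`IEPlugin` / `ExecutableNetwork`) and a dummy array standing in for a real preprocessed image:

```python
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

# Read the IR and double the spatial dimensions, as above
net = IENetwork.from_ir(model=path_to_the_xml, weights=path_to_the_bin)
input_layer = next(iter(net.inputs))
n, c, h, w = net.inputs[input_layer]
net.reshape({input_layer: (n, c, h*2, w*2)})

# The reshape must happen before this point: loading the network
# into the plugin fixes its shapes
plugin = IEPlugin(device="CPU")
exec_net = plugin.load(network=net)

# Dummy stand-in for a preprocessed image of the new size
image = np.zeros((n, c, h*2, w*2), dtype=np.float32)
result = exec_net.infer({input_layer: image})
```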
I have loaded a fully convolutional SqueezeNet, but I get the following error when reshaping:
```
----> 1 net.reshape({input_layer: (n, c, h*2, w*2)})

ie_api.pyx in inference_engine.ie_api.IENetwork.reshape()

RuntimeError: Dims and format are inconsistent.
/home/user/teamcity/work/scoring_engine_build/releases_openvino-2018-r3/src/inference_engine/ie_layouts.cpp:245
/home/user/teamcity/work/scoring_engine_build/releases_openvino-2018-r3/include/details/ie_exception_conversion.hpp:80
```
Hi Jiajun!
Is it possible to share your model, to make sure we are looking at the same topology?
Regards,
Nikolay
Hi Jiajun!
Thanks for sending the model! Was it converted from Caffe or MXNet?
A fix for this issue should appear in the next release, at least for MXNet. For now, you can avoid the error by removing `dim=0` from the Flatten layer in the IR.
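In case it helps, here is a rough sketch of how that attribute could be stripped from the IR programmatically. It assumes the Flatten layer carries `dim` as an attribute of its `<data>` element, which may differ between IR versions, so inspect your `.xml` before relying on it:

```python
import xml.etree.ElementTree as ET

# Remove the dim attribute from every Flatten layer in the IR .xml
# (assumed IR layout; verify against your own model file)
tree = ET.parse(path_to_the_xml)
for layer in tree.getroot().iter("layer"):
    if layer.get("type") == "Flatten":
        data = layer.find("data")
        if data is not None and "dim" in data.attrib:
            del data.attrib["dim"]
tree.write(path_to_the_xml)  # or write to a new file to keep the original
```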
Best regards,
Nikolay
The model was converted from MXNet.
Thanks for your reply; the model works now.
I understand that currently the network must be reshaped before it is loaded into the plugin. In my work I need to reshape it several times, which triggers multiple plugin loads, each taking 0.5-1 s. This is rather expensive; is there a faster way that avoids it? Inference itself is definitely faster on OpenVINO than on TensorFlow CPU, but TF allows dynamic input sizes and so avoids this reshaping and reloading, which eats into OpenVINO's net performance gain in this case. Any advice?
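A common workaround in this situation, assuming the set of input sizes is bounded and at the cost of keeping several loaded networks in memory, is to cache one executable network per shape so each reshape-and-load is paid at most once. A sketch with a hypothetical `get_exec_net` helper:

```python
from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device="CPU")
exec_nets = {}  # maps an (n, c, h, w) tuple -> loaded ExecutableNetwork

def get_exec_net(shape):
    # Reshape and load only the first time a given shape is seen;
    # later requests for the same shape reuse the cached network.
    if shape not in exec_nets:
        net = IENetwork.from_ir(model=path_to_the_xml, weights=path_to_the_bin)
        input_layer = next(iter(net.inputs))
        net.reshape({input_layer: shape})
        exec_nets[shape] = plugin.load(network=net)
    return exec_nets[shape]
```

Padding or bucketing inputs to a small fixed set of sizes keeps the cache, and the number of loads, bounded.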