Hello,
Is it possible to use the Intel Inference Engine with a variable input shape? In Caffe, I have a fully convolutional network that can accept images of practically any size as input, but I haven't found a way to apply the Inference Engine properly without regenerating and reloading the network for each possible (width, height) pair. That's obviously not an option because of the speed issues.
Thanks.
Hi Justas!
It's possible to use the Inference Engine with a variable input size starting from the 2018 R3 release of the OpenVINO™ toolkit.
Please refer to the "Shape Inference" feature, which allows changing the model input size after reading the IR and before loading the ICNNNetwork to a plugin.
Best regards,
Nikolay
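As a rough illustration of the workflow Nikolay describes — reshape after reading the IR, before loading to a device — here is a minimal sketch using the legacy Inference Engine Python API from a later OpenVINO release (exact attribute names vary between releases; the model file names and the target shape are placeholders):

```python
from openvino.inference_engine import IECore  # legacy IE Python API

ie = IECore()
# "model.xml"/"model.bin" are placeholders for your converted IR files.
net = ie.read_network(model="model.xml", weights="model.bin")

# Pick the (single) input of the network.
input_name = next(iter(net.input_info))

# Shape Inference: change the spatial input size (NCHW layout assumed)
# after reading the IR and before loading the network to a plugin.
net.reshape({input_name: (1, 3, 512, 512)})

exec_net = ie.load_network(network=net, device_name="CPU")
```

Reshaping once per distinct input size is still a network recompilation, so in practice it helps to bucket incoming images into a few fixed sizes and cache one executable network per bucket.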
Dear Justas,
The IE cannot handle variable input shapes; the height and width must be defined during the Model Optimizer step. Only the batch size can be changed dynamically for certain topologies at the IE stage.
Best,
Severine
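For the batch-size case Severine mentions, the legacy IE Python API exposes a batch attribute on the network; a minimal sketch (file names are placeholders, and only some topologies support this):

```python
from openvino.inference_engine import IECore  # legacy IE Python API

ie = IECore()
# "model.xml"/"model.bin" are placeholders for your converted IR.
net = ie.read_network(model="model.xml", weights="model.bin")

# For supported topologies, the batch dimension can be changed
# after reading the IR, without re-running Model Optimizer.
net.batch_size = 4

exec_net = ie.load_network(network=net, device_name="CPU")
```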