Dear Intel Guru,
Hi, I have tried OpenVINO, and it is awesome.
I have another question.
My model's input shape is [1, 3, height, width].
I just tried, and OpenVINO does seem to support variable input sizes.
When converting with mo.py, though, the warning still says that a static input size is strongly recommended.
The results also differ from the original ONNX model.
Can anyone help?
Here is the model link:
and the command is:
$python mo.py --input_model model.onnx --output_dir ~/Downloads
Hi @TonyWong ,
In Model Optimizer, you can specify the input shape with the --input_shape parameter.
Example: python mo.py --input_model model.onnx --output_dir ~/Downloads --input_shape [1,3,227,227]
Note that the order of the dimensions depends on the input layout of the framework the model comes from:
1. Caffe model: [N,C,H,W]
2. TensorFlow model: [N,H,W,C]
Model Optimizer requires the [N,C,H,W] layout and performs the necessary transformation on the model. The shape must not contain undefined dimensions such as ? or -1.
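To illustrate the layout difference, here is a minimal numpy sketch (not part of Model Optimizer itself, and the array contents are made up) showing how a TensorFlow-style NHWC tensor maps onto the NCHW layout that Model Optimizer works with:

```python
import numpy as np

# Hypothetical NHWC tensor: batch=1, height=227, width=227, channels=3.
nhwc = np.arange(1 * 227 * 227 * 3, dtype=np.float32).reshape(1, 227, 227, 3)

# Move the channel axis from last position to second: NHWC -> NCHW.
nchw = nhwc.transpose(0, 3, 1, 2)

print(nhwc.shape)  # (1, 227, 227, 3)
print(nchw.shape)  # (1, 3, 227, 227)
```

The same element is addressed as nhwc[n, h, w, c] before the transpose and nchw[n, c, h, w] after it, which is why --input_shape must be given in the order matching the source framework's layout.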
Depending on your model, you may also need to consider the following:
Hope this helps!
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.