Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

ONNX model with dynamic batch size using OpenVINO 2022.2 C++ API

Benguigui__Michael

Hi!

Using the OpenVINO 2022.2.0 C++ API, I am having difficulty converting my ONNX model into IR (xml + bin) before loading it into a CompiledModel. The batch size is dynamic and my target device is the NCS2 (Myriad X).


The model (autoencoder) input shape:

tensor: float32[batch_size,8,32,32]



What I initially did:

...
// Load the ONNX model into an ov::Model (shapes are still dynamic at this point)
std::shared_ptr<ov::Model> model = core.read_model(onnxModelFilePath);
...
// Write the IR (xml + bin) to disk
ov::serialize(model, irModelFilesPath + ".xml", irModelFilesPath + ".bin");
// Enable hardware acceleration on the Myriad plugin
core.set_property("MYRIAD", {{"MYRIAD_ENABLE_HW_ACCELERATION", true}});
...
// Compile the generated IR for the target device
ov::CompiledModel compiledModel = core.compile_model(irModelFilesPath + ".xml", targetDevice);
...

 

The error I got:

what(): Cannot get length of dynamic dimension
ov::serialize(model, irModelFilesPath + ".xml", irModelFilesPath + ".bin");

I've read (in one of the Intel dev team's answers):
==>
"I think the underlying issue is that your model contains dynamic shapes which Myriad plugin doesn't support".

 

Therefore, by adding

model->reshape({1,8,32,32});

right before

ov::serialize(model, irModelFilesPath + ".xml", irModelFilesPath + ".bin");

I am getting:

terminate called after throwing an instance of 'ov::NodeValidationFailure'
what(): Check '((axis >= min_value) && (axis <= max_value))' failed at core/src/validation_util.cpp:863:
While validating node 'v4::ReduceL2 ReduceL2_13 (Gemm_12[0]:f32{64}, Constant_257[0]:i64{1}) -> (f32...)' with friendly_name 'ReduceL2_13':
Parameter axis 1 out of the tensor rank range [-1, 0].

For info, ReduceL2_13 corresponds to one of my last layers (cf. attached PNG).

 

Is there anything else I could test to tackle this issue from my C++ code? When using the reshape function, is the static batch size correctly propagated across the network?
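
For reference, here is the minimal standalone test I am running (the paths are just the ones from my setup, used purely for illustration, and the input name is read directly from the model). It currently stops inside reshape with the ReduceL2 error above, but the loop afterwards is how I intended to check the shape propagation:

#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Placeholder paths for illustration only.
    const std::string onnxModelFilePath = "TinyEncoder_PhiSAT.onnx";
    const std::string irModelFilesPath  = "TinyEncoder_PhiSAT";

    std::shared_ptr<ov::Model> model = core.read_model(onnxModelFilePath);

    // Fix the dynamic batch dimension to 1, addressing the input by its tensor name.
    std::map<std::string, ov::PartialShape> newShapes;
    newShapes[model->input().get_any_name()] = ov::PartialShape{1, 8, 32, 32};
    model->reshape(newShapes);

    // Dump every node's output shape to see where (if anywhere) a dimension stays dynamic.
    for (const auto& node : model->get_ordered_ops()) {
        std::cout << node->get_friendly_name() << " -> "
                  << node->get_output_partial_shape(0) << std::endl;
    }

    // Serialize to IR only once all dimensions are static (required for MYRIAD).
    ov::serialize(model, irModelFilesPath + ".xml", irModelFilesPath + ".bin");
    return 0;
}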


Regards
Michael ++

Aznie_Intel
Moderator

Hi Benguigui__Michael,

 

Thanks for reaching out.

 

Have you tried converting your ONNX model with a static shape?

mo --input_model model.onnx --use_new_frontend --input_shape [1,x,x,x] --static_shape



Regards,

Aznie


Benguigui__Michael

Thanks a lot for your help.

 

docker run --rm -it -u 0 -v /my_home/onnx_models:/onnx_models openvino/ubuntu18_dev:2022.2.0 bash

And then

python3 /usr/local/lib/python3.6/dist-packages/openvino/tools/mo/mo.py --input_model /onnx_models/TinyEncoder_PhiSAT.onnx --use_new_frontend --input_shape [1,8,32,32] --static_shape

Gives

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /onnx_models/TinyEncoder_PhiSAT.onnx
        - Path for generated IR:        /opt/intel/.
        - IR output name:       TinyEncoder_PhiSAT
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,8,32,32]
        - Source layout:        Not specified
        - Target layout:        Not specified
        - Layout:       Not specified
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - User transformations:         Not specified
        - Reverse input channels:       False
        - Enable IR generation for fixed input shape:   True
        - Use the transformations config file:  None
Advanced parameters:
        - Force the usage of legacy Frontend of Model Optimizer for model conversion into IR:   False
        - Force the usage of new Frontend of Model Optimizer for model conversion into IR:      True
OpenVINO runtime found in:      /opt/intel/openvino/python/python3.6/openvino
OpenVINO runtime version:       2022.2.0-7713-af16ea1d79a-releases/2022/2
Model Optimizer version:        2022.2.0-7713-af16ea1d79a-releases/2022/2
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  While validating ONNX node '<Node(ReduceL2): ReduceL2_13>':
Check '((axis >= min_value) && (axis <= max_value))' failed at core/src/validation_util.cpp:863:
While validating node 'v4::ReduceL2 ReduceL2_258 (Gemm_12[0]:f32{64}, Constant_257[0]:i64{1}) -> (dynamic...)' with friendly_name 'ReduceL2_258':
 Parameter axis 1 out of the tensor rank range [-1, 0].

[ ERROR ]  Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/openvino/tools/mo/main.py", line 533, in main
    ret_code = driver(argv)
  File "/usr/local/lib/python3.6/dist-packages/openvino/tools/mo/main.py", line 489, in driver
    graph, ngraph_function = prepare_ir(argv)
  File "/usr/local/lib/python3.6/dist-packages/openvino/tools/mo/main.py", line 394, in prepare_ir
    ngraph_function = moc_pipeline(argv, moc_front_end)
  File "/usr/local/lib/python3.6/dist-packages/openvino/tools/mo/moc_frontend/pipeline.py", line 151, in moc_pipeline
    ngraph_function = moc_front_end.convert(input_model)
RuntimeError: While validating ONNX node '<Node(ReduceL2): ReduceL2_13>':
Check '((axis >= min_value) && (axis <= max_value))' failed at core/src/validation_util.cpp:863:
While validating node 'v4::ReduceL2 ReduceL2_258 (Gemm_12[0]:f32{64}, Constant_257[0]:i64{1}) -> (dynamic...)' with friendly_name 'ReduceL2_258':
 Parameter axis 1 out of the tensor rank range [-1, 0].


[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -----------------------------------------------

... Quite a similar error...

Aznie_Intel
Moderator

 

Hi Benguigui__Michael,

 

The ReduceL2_258 operation is not supported in OpenVINO. To use the model, I would advise you either to remove the operation or to change the ReduceL2_258 operation to another operation that is supported, based on the ONNX* Supported Operators list.
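
As an illustration only (not validated against your model), the same idea could also be tried directly on the loaded ov::Model from C++, replacing each ReduceL2 node with the mathematically equivalent Sqrt(ReduceSum(x * x, axes)). The helper below is a hypothetical sketch; note that it may not remove the axis-out-of-range error if the Gemm output has already lost its batch dimension:

#include <memory>
#include <openvino/core/graph_util.hpp>
#include <openvino/openvino.hpp>
#include <openvino/opsets/opset8.hpp>

// Hypothetical helper: rewrite every ReduceL2 node in the loaded model as
// Sqrt(ReduceSum(x * x, axes)), keeping the original axes and keep_dims flag.
static void decompose_reduce_l2(const std::shared_ptr<ov::Model>& model) {
    for (const auto& node : model->get_ordered_ops()) {
        auto reduce = std::dynamic_pointer_cast<ov::opset8::ReduceL2>(node);
        if (!reduce)
            continue;

        auto data = reduce->input_value(0);   // tensor being reduced
        auto axes = reduce->input_value(1);   // reduction axes constant

        auto squared = std::make_shared<ov::opset8::Multiply>(data, data);
        auto sum = std::make_shared<ov::opset8::ReduceSum>(squared, axes, reduce->get_keep_dims());
        auto sqrt = std::make_shared<ov::opset8::Sqrt>(sum);

        // Keep the original name so later error messages stay readable.
        sqrt->set_friendly_name(reduce->get_friendly_name());
        ov::replace_node(reduce, sqrt);
    }
    // Re-run shape inference after the rewrite.
    model->validate_nodes_and_infer_types();
}

You would call decompose_reduce_l2(model) right after core.read_model(...) and before any reshape or serialize call.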

 

 

Regards,

Aznie

 

Aznie_Intel
Moderator

Hi Benguigui__Michael,


This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.



Regards,

Aznie




