Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Conversion of my ONNX model to IR results in a model with fewer outputs

Buvana_R
Beginner

Hello,

OpenVINO version: 2020.3.341

I have a custom-trained SSD300 model in ONNX format, with its input and output layers defined as follows:

---- input names, shapes, types: ----
input.1 [1, 3, 300, 300] tensor(float)
---- output names, shapes, types: ----
265 [1, 8732, 4] tensor(float)
278 [1, 183372] tensor(float)
279 [8732, 4] tensor(float)

I am able to run inference with this model just fine using onnxruntime.
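
(For reference, the I/O listing above can be produced with a minimal onnxruntime snippet along these lines; "ssd300.onnx" is a placeholder for the actual file name:)

import onnxruntime as ort

# Load the ONNX model and list its inputs/outputs;
# "ssd300.onnx" is a placeholder for the actual file name.
sess = ort.InferenceSession("ssd300.onnx")

print("---- input names, shapes, types: ----")
for i in sess.get_inputs():
    print(i.name, i.shape, i.type)

print("---- output names, shapes, types: ----")
for o in sess.get_outputs():
    print(o.name, o.shape, o.type)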

I converted the model to IR format using mo.py (Model Optimizer), and the resulting model turns out to have only 2 output tensors, not 3. Here is the input and output info of the converted model:

Input name: input.1; shape: [1, 3, 300, 300]; dtype: DT_FLOAT


Output name: Reshape_170; shape: [1, 183372]; dtype: DT_FLOAT
Output name: Reshape_161; shape: [1, 8732, 4]; dtype: DT_FLOAT
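
(This listing can be reproduced with the Inference Engine Python API; a minimal sketch, assuming the IR files are named ssd300.xml/ssd300.bin:)

from openvino.inference_engine import IECore

# Read the converted IR; the .xml/.bin paths are placeholders.
ie = IECore()
net = ie.read_network(model="ssd300.xml", weights="ssd300.bin")

# net.inputs/net.outputs map tensor names to DataPtr objects on this
# 2020.3-era API (newer releases use net.input_info for inputs).
for name, data in net.inputs.items():
    print("Input name: %s; shape: %s; dtype: %s" % (name, data.shape, data.precision))
for name, data in net.outputs.items():
    print("Output name: %s; shape: %s; dtype: %s" % (name, data.shape, data.precision))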

I attach the ONNX and IR models in this zip file here.

Please note that tensor '279', which appears in the ONNX model but has no equivalent in the IR, is a constant. It appears that OpenVINO is not exposing this constant tensor; as far as I can tell, its equivalent does not appear anywhere in the graph definition files.

How do I get hold of this constant tensor in the IR world? (My current workaround is to save it as an .npy file and load that file at inference time.)
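
(In case it helps, the workaround looks roughly like this; a sketch that assumes tensor '279' is stored either as a graph initializer or as the output of a Constant node in the ONNX file:)

import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("ssd300.onnx")  # placeholder path

const = None
# The tensor may be stored as a graph initializer...
for init in model.graph.initializer:
    if init.name == "279":
        const = numpy_helper.to_array(init)
# ...or produced by a Constant node whose output is '279'.
if const is None:
    for node in model.graph.node:
        if node.op_type == "Constant" and "279" in node.output:
            const = numpy_helper.to_array(node.attribute[0].t)

np.save("tensor_279.npy", const)  # later: np.load("tensor_279.npy")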

Thank you,

-Buvana

Buvana_R
Beginner

Here is the zip of the IR files.

IntelSupport
Community Manager

Hi Buvana_R,

Thanks for reaching out. We are currently investigating this and will update you with the information soon.


Regards,

Aznie


IntelSupport
Community Manager

Hi Buvana,

Greetings to you.

The Model Optimizer has two main purposes:


  • Produce a valid Intermediate Representation. If this main conversion artifact is not valid, the Inference Engine cannot run. The primary responsibility of the Model Optimizer is to produce the two files (.xml and .bin) that form the Intermediate Representation.
  • Produce an optimized Intermediate Representation. Pre-trained models contain layers that are important for training, such as the Dropout layer, but useless during inference; in many cases, these operations can be automatically removed from the resulting Intermediate Representation. In addition, if a group of operations can be represented as a single mathematical operation, and thus as a single operation node in the model graph, the Model Optimizer recognizes such patterns and replaces this group of operation nodes with a single operation. The result is an Intermediate Representation that has fewer operation nodes than the original model, which decreases the inference time.

 

A non-optimized model can be obtained by adding --finegrain_fusing during MO conversion (see Converting a Model Using General Conversion Parameters - OpenVINO™ Toolkit); the nodes identified by this parameter will not be touched by any optimizations, as described in Model Optimization Techniques - OpenVINO™ Toolkit.
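
For example, a command along these lines (a sketch; the model path and node names are placeholders taken from this thread, so please replace them with the nodes you want left untouched):

python mo.py --input_model ssd300.onnx --finegrain_fusing Reshape_161,Reshape_170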

 

However, the optimized IR files are the expected behavior of the Model Optimizer; if you need to keep specific nodes from being optimized away, please try the --finegrain_fusing parameter.

 

Meanwhile, I would recommend upgrading your OpenVINO to our latest version (2021.3) for better feature support.

 

Regards,

Aznie


IntelSupport
Community Manager

Hi Buvana_R,

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Aznie

