Hello,
OpenVINO version: 2020.3.341
I have a custom-trained SSD300 model in ONNX format, with its input and output layers defined as follows:
---- input names, shapes, types: ----
input.1 [1, 3, 300, 300] tensor(float)
---- output names, shapes, types: ----
265 [1, 8732, 4] tensor(float)
278 [1, 183372] tensor(float)
279 [8732, 4] tensor(float)
I am able to run inference with this model without any issues using onnxruntime.
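For reference, this is roughly how I inspect the model and run it with onnxruntime (a minimal sketch; the file name ssd300.onnx is a placeholder for the attached model):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("ssd300.onnx")

# Print the input/output metadata shown above
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print(out.name, out.shape, out.type)

# A run returns all three outputs, including the constant '279'
dummy = np.random.rand(1, 3, 300, 300).astype(np.float32)
results = sess.run(None, {"input.1": dummy})
print([r.shape for r in results])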
I converted the model to IR format using mo.py (Model Optimizer), and the resulting model turns out to have only two output tensors instead of three. Here is the input and output info of the converted model:
Input name: input.1; shape: [1, 3, 300, 300]; dtype: DT_FLOAT
Output name: Reshape_170; shape: [1, 183372]; dtype: DT_FLOAT
Output name: Reshape_161; shape: [1, 8732, 4]; dtype: DT_FLOAT
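For completeness, this is how I list the inputs and outputs of the converted model (a minimal sketch against the 2020.3 Inference Engine Python API; the IR file names are placeholders):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ssd300.xml", weights="ssd300.bin")

# Only the two Reshape outputs show up; nothing corresponds to '279'
for name, data in net.inputs.items():
    print("input:", name, data.shape)
for name, data in net.outputs.items():
    print("output:", name, data.shape)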
I have attached the ONNX and IR models in the zip file here.
Please note that the tensor '279', which appears in the ONNX model but has no equivalent in the IR, is a constant. It appears that OpenVINO does not expose this constant tensor; I think its equivalent does not even have a place in the graph definition files.
How do I get hold of this constant tensor in the IR world? (As a workaround, I am saving it as an .npy file and loading that file, as sketched below.)
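Concretely, the workaround looks roughly like this (a sketch; it assumes '279' is stored either as a graph initializer or as a Constant node in the ONNX file, and priors_279.npy is a placeholder file name):

import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("ssd300.onnx")

# '279' may be stored as a graph initializer...
tensor = None
for init in model.graph.initializer:
    if init.name == "279":
        tensor = numpy_helper.to_array(init)

# ...or be produced by a Constant node
if tensor is None:
    for node in model.graph.node:
        if node.op_type == "Constant" and "279" in node.output:
            for attr in node.attribute:
                if attr.name == "value":
                    tensor = numpy_helper.to_array(attr.t)

np.save("priors_279.npy", tensor)
# and later, in the IR world: priors = np.load("priors_279.npy")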
Thank you,
-Buvana
Hi Buvana_R,
Thanks for reaching out. We are currently investigating this and will update you soon.
Regards,
Aznie
Hi Buvana,
Greetings to you.
The Model Optimizer has two main purposes:
- Produce a valid Intermediate Representation. If this main conversion artifact is not valid, the Inference Engine cannot run. The primary responsibility of the Model Optimizer is to produce the two files (.xml and .bin) that form the Intermediate Representation.
- Produce an optimized Intermediate Representation. Pre-trained models contain operations that are needed only for training (such as Dropout) and are useless during inference; in many cases, these operations can be automatically removed from the resulting Intermediate Representation. In addition, if a group of operations can be represented as a single mathematical operation, and thus as a single operation node in the model graph, the Model Optimizer recognizes such patterns and replaces the group with a single operation node. The result is an Intermediate Representation that has fewer operation nodes than the original model, which decreases the inference time.
A non-optimized model can be obtained by adding the --finegrain_fusing parameter during Model Optimizer conversion (see Converting a Model Using General Conversion Parameters - OpenVINO™ Toolkit); the nodes identified by this parameter will not be touched by any optimizations, as described in Model Optimization Techniques - OpenVINO™ Toolkit.
However, an optimized IR is the expected behavior of the Model Optimizer, so we would appreciate it if you could work around this by using the --finegrain_fusing parameter, as in the example below.
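For example, a hypothetical invocation might look like this (the names after --finegrain_fusing are placeholders; the parameter takes a regex for the layers that should be left untouched, as in the documentation example):

python mo.py --input_model ssd300.onnx --finegrain_fusing Convolution1,.*Scale.*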
Meanwhile, I would recommend upgrading OpenVINO to our latest version (2021.3) for better feature support.
Regards,
Aznie
Hi Buvana_R,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie
