Hi,
Currently, I am working on a custom network design for SSD (Single Shot Detector) using PyTorch. SSD requires Non-Maximum Suppression (NMS) on its output layers. I am using the torch.onnx.export method to export my model.
I have already exported the model with ONNX opset 11, since NMS is only supported from opset 10 onward.
I have successfully converted the model with the OpenVINO Model Optimizer (mo_onnx.py).
However, I encountered an error when I tried to load the resulting IR with OpenVINO (image attached).
When I visualize both models (ONNX opset 9 and ONNX opset 11), I can see some differences in the input shape of the Clip layer.
Any solution or suggestion for this problem? And how can the model pass Model Optimizer conversion but then fail at loading?
I am using the latest OpenVINO 2020.3, and I also tried opset 10; it does not work either.
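For reference, a minimal export sketch along these lines (the model, input shape, file names, and tensor names are placeholders, not the actual network):

```python
import torch

def export_to_onnx(model: torch.nn.Module, onnx_path: str = "ssd_custom.onnx") -> None:
    """Export a (hypothetical) custom SSD model to ONNX with opset 11."""
    model.eval()
    # Typical SSD300 input shape; adjust to the actual network
    dummy_input = torch.randn(1, 3, 300, 300)
    torch.onnx.export(
        model,
        dummy_input,
        onnx_path,
        opset_version=11,          # NonMaxSuppression needs opset >= 10
        input_names=["input"],
        output_names=["boxes", "scores"],
    )

# Then convert the exported file to IR with Model Optimizer, for example:
#   python mo_onnx.py --input_model ssd_custom.onnx
```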
The Inference Engine now supports the NMS layer in the CPU plugin via the Extensibility mechanism, and also via the Shape Inference feature.
Please give it a try.
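If it helps, a rough sketch of how registering such an extension might look from the Python API (the library path is an assumption and depends on how the extension was built):

```python
from openvino.inference_engine import IECore

ie = IECore()
# Hypothetical path to a custom-layers library built via the Extensibility mechanism;
# register it with the CPU plugin before reading/loading the network
ie.add_extension("libcpu_extension.so", "CPU")
```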
Hi @ibrahimsoliman
The MobileNet SSD PyTorch topology has not been officially validated for ONNX-to-IR conversion via Model Optimizer, so unfortunately we cannot guarantee that it will work with the OpenVINO toolkit. You can find the list of supported PyTorch topologies here: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX.html#supported_pytorch_models_via_onnx_conversion
We apologize for the inconvenience.
Best regards, Max.
OK, thanks.
Just to mention: once I removed NMS from the SSD PyTorch model, I can convert and run inference with OpenVINO. Unfortunately, I now have to apply NMS manually to the outputs, outside the model inference.
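For anyone hitting the same limitation, a rough sketch of what that external post-processing could look like, here using torchvision's NMS (the output names, box format, and threshold are assumptions, not the exact model):

```python
import torch
from torchvision.ops import nms

def postprocess(raw_boxes, raw_scores, iou_threshold: float = 0.45):
    """Apply NMS outside the exported model, on the raw inference outputs."""
    boxes = torch.as_tensor(raw_boxes, dtype=torch.float32)    # (N, 4) in (x1, y1, x2, y2)
    scores = torch.as_tensor(raw_scores, dtype=torch.float32)  # (N,)
    keep = nms(boxes, scores, iou_threshold)
    return boxes[keep], scores[keep]
```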
Thank you for sharing this information with the OpenVINO community!
Hi, I have checked the release notes for OpenVINO 2020.4 and noticed that support for Clip-11 has been added.
I successfully ran the ONNX Model Optimizer conversion again, including NMS,
but when I load the network I get a different error:
RuntimeError: get_shape was called on a descriptor::Tensor with dynamic shape
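For context, this is roughly the load step where the error is reported (the IR file names are placeholders):

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ssd_custom.xml", weights="ssd_custom.bin")
# RuntimeError: get_shape was called on a descriptor::Tensor with dynamic shape
exec_net = ie.load_network(network=net, device_name="CPU")
```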
The Inference Engine now supports the NMS layer in the CPU plugin via the Extensibility mechanism, and also via the Shape Inference feature.
Please give it a try.
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.