There are two ways to perform inference with OpenVINO (sketched below):
1) Loading a model in ONNX format directly
2) Converting the model to IR format and running inference on that
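For concreteness, the two ways look like this in the 2022.1 Python API (a minimal sketch; the file names, device, and input shape are placeholders):

```python
import numpy as np
from openvino.runtime import Core

core = Core()

# Way 1: read the ONNX file directly; OpenVINO converts it in memory.
model_onnx = core.read_model("model.onnx")             # hypothetical path
compiled_onnx = core.compile_model(model_onnx, "CPU")

# Way 2: read the IR (.xml + .bin pair) produced by Model Optimizer.
model_ir = core.read_model("model.xml")                # model.bin is picked up automatically
compiled_ir = core.compile_model(model_ir, "CPU")

# Inference itself is identical for both; the input shape is a placeholder.
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled_ir([dummy_input])[compiled_ir.output(0)]
```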
Someone reported that inference with IR is slightly faster than with .onnx, though accuracy with IR is slightly worse than with .onnx.
Is this plausible? If so, why can IR achieve faster inference speed?
I could not find a good explanation at https://docs.openvino.ai/2022.1/index.html#
I would like to understand why.
Best regards.
Hi Timosy,
Thanks for reaching out to us.
For your information, Model Optimizer not only converts a model to Intermediate Representation (IR) but also performs several optimizations. For example, certain linear operations such as BatchNorm and ScaleShift are automatically fused into convolutions, as shown here.
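For illustration, the conversion can be run from the command line with `mo --input_model model.onnx`, or from Python via the Model Optimizer API (a minimal sketch; the file names are placeholders):

```python
# Minimal sketch: convert an ONNX model to IR from Python.
# The fusions mentioned above (e.g. BatchNorm/ScaleShift into convolution)
# are applied during this conversion step.
from openvino.tools.mo import convert_model
from openvino.runtime import serialize

ov_model = convert_model("model.onnx")         # hypothetical input file
serialize(ov_model, "model.xml", "model.bin")  # writes topology + weights
```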
In addition, you may refer to the following links for more information on performing inference with IR and ONNX using the OpenVINO™ Toolkit:
· Introducing: ONNX Format Support for the Intel® Distribution of OpenVINO™ toolkit
· How to Speed Up Deep Learning Inference Using OpenVINO Toolkit
Regards,
Wan
Thanks for the useful information!
Best regards.
Hi Timosy,
Thanks for your question.
This thread will no longer be monitored since the requested information has been provided.
If you need any additional information from Intel, please submit a new question.
Best regards,
Wan