Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

What is the difference between ONNX and IR in terms of inference speed?

timosy
New Contributor I

There are two ways to perform inference with OpenVINO (see the sketch below):

1) Inference by loading a model in ONNX format directly
2) Converting the model to IR format and then running inference on it
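
For concreteness, here is a minimal sketch of the two paths with the OpenVINO Runtime Python API; the file names are placeholders and CPU is just an example device:

from openvino.runtime import Core

core = Core()

# Path 1: load the ONNX model directly; OpenVINO reads .onnx without prior conversion
onnx_model = core.read_model("model.onnx")
compiled_onnx = core.compile_model(onnx_model, "CPU")

# Path 2: load the IR produced by Model Optimizer (an .xml/.bin pair)
ir_model = core.read_model("model.xml")
compiled_ir = core.compile_model(ir_model, "CPU")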

 

Someone reported that inference with IR is slightly faster than with .onnx, though the accuracy with IR is slightly worse than with .onnx.

 

Is this likely? If so, why can IR achieve faster inference speed?

I could not find a good explanation in https://docs.openvino.ai/2022.1/index.html#

I would like to understand why.

Best regards. 

 

3 Replies
Wan_Intel
Moderator

Hi Timosy,

Thanks for reaching out to us.

For your information, Model Optimizer not only converts a model to Intermediate Representation but also performs several optimizations. For example, certain primitives, such as linear operations (BatchNorm and ScaleShift), are automatically fused into convolutions.
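
Since whether IR ends up faster depends on the model and hardware, one way to check is to convert the model (for example with the Model Optimizer command mo --input_model model.onnx, which produces an .xml/.bin pair) and then time both paths yourself. A minimal sketch with the OpenVINO Runtime Python API follows; the file names, the CPU device, and the assumption of a single input with a static shape are placeholders:

import time
import numpy as np
from openvino.runtime import Core

core = Core()

def average_latency(path, device="CPU", runs=100):
    # Read and compile the model, then time repeated synchronous inferences
    compiled = core.compile_model(core.read_model(path), device)
    request = compiled.create_infer_request()
    port = compiled.input(0)                        # assumes one input with a static shape
    data = np.random.rand(*port.shape).astype(np.float32)
    start = time.perf_counter()
    for _ in range(runs):
        request.infer({port: data})
    return (time.perf_counter() - start) / runs

print("ONNX:", average_latency("model.onnx"))       # placeholder file names
print("IR:  ", average_latency("model.xml"))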

 

In addition, you may refer to the following links for more information on performing inference with IR and ONNX models using the OpenVINO™ Toolkit:

·      Introducing: ONNX Format Support for the Intel® Distribution of OpenVINO™ toolkit

·      How to Speed Up Deep Learning Inference Using OpenVINO Toolkit

 

 

Regards,

Wan


timosy
New Contributor I

Thanks for the useful information!

Best regards.

Wan_Intel
Moderator

Hi Timosy,

Thanks for your question.

This thread will no longer be monitored since we have provided the information.

If you need any additional information from Intel, please submit a new question.

 

 

Best regards,

Wan

