Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

torch.compile() vs ONNX backend vs OpenVINO IR backend

forward_pass
Beginner
I plan to deploy a PyTorch model on a device with an Intel CPU, in a native Python program. The OpenVINO documentation states three options for deployment:
1) torch.compile()
2) ONNX
3) OpenVINO IR format (.xml and .bin)
Which is the most appropriate route, and what are the differences between each?
Megat_Intel
Moderator

Hi Forward_pass,

Thank you for reaching out to us.

When running inference directly from the ONNX source format, the model conversion happens automatically and is handled by OpenVINO™. This method is convenient, but it might not give the best performance or stability, and it does not provide you with model optimization options.
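
To illustrate, here is a minimal sketch of running an ONNX model directly with the OpenVINO™ Runtime Python API. The file name model.onnx and the input shape are placeholders for your own model:

    import numpy as np
    import openvino as ov

    core = ov.Core()
    # OpenVINO converts the ONNX model on the fly when compiling it;
    # no separate conversion step is needed
    compiled_model = core.compile_model("model.onnx", device_name="CPU")

    # Dummy input matching the model's expected shape (placeholder shape)
    input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
    results = compiled_model(input_data)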

On the other hand, the torch.compile() method is similar in that the model conversion happens automatically. With this method, the supported operators in the model are converted using OpenVINO's PyTorch decoder and executed using the OpenVINO™ runtime, while all unsupported operators fall back to the native PyTorch runtime on the CPU. For more information, you can check out the PyTorch Deployment via “torch.compile” page.
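
As a rough sketch of this route (assuming a torchvision ResNet-50 purely as an example; any nn.Module works the same way):

    import torch
    import torchvision.models as models
    import openvino.torch  # registers the "openvino" backend for torch.compile

    model = models.resnet50(weights="DEFAULT")
    model.eval()

    # Supported operators run through OpenVINO; unsupported ones
    # fall back to the native PyTorch runtime on the CPU
    compiled_model = torch.compile(model, backend="openvino")

    with torch.no_grad():
        output = compiled_model(torch.rand(1, 3, 224, 224))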

Running OpenVINO™ inference with the IR model format offers the best possible results and is the recommended option. This format provides lower first-inference latency and options for model optimization. Because the IR format is the most optimized for OpenVINO™ inference, it can be deployed with maximum performance. You can read more about this on the Model Preparation page.
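
A minimal sketch of the offline conversion and deployment steps, using a tiny placeholder model and placeholder file names:

    import torch
    import openvino as ov

    # Placeholder model; substitute your own trained nn.Module
    model = torch.nn.Sequential(torch.nn.Linear(10, 2))
    model.eval()

    # Convert once, offline, and save as IR (.xml + .bin)
    ov_model = ov.convert_model(model, example_input=torch.rand(1, 10))
    ov.save_model(ov_model, "model.xml")

    # At deployment time, load the pre-converted IR; no conversion
    # happens at runtime, which lowers first-inference latency
    compiled_model = ov.Core().compile_model("model.xml", device_name="CPU")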

We recommend using the OpenVINO™ IR format for the best performance. That said, you could also compare the performance of the three methods yourself by running a benchmark test. Please refer to our Throughput Benchmark Python sample for more information.
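
If you just want a quick number before running the full sample, a simplified timing sketch like the one below can serve as a first comparison (this is not the official benchmark sample; model.xml and the input shape are placeholders):

    import time
    import numpy as np
    import openvino as ov

    compiled_model = ov.Core().compile_model("model.xml", device_name="CPU")
    input_data = np.random.rand(1, 10).astype(np.float32)  # placeholder shape

    compiled_model(input_data)  # warm-up run
    n_runs = 100
    start = time.perf_counter()
    for _ in range(n_runs):
        compiled_model(input_data)
    elapsed = time.perf_counter() - start
    print(f"Average latency: {elapsed / n_runs * 1000:.2f} ms")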

Regards,

Megat

Megat_Intel
Moderator

Hi Forward_pass,

Thank you for your question. This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question. 

Regards,

Megat
