Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

PyTorch benchmark via “torch.compile” with OpenVINO

AmandaLee
New Contributor I

Hi Expert,

 

I've been conducting some tests to compare the FPS of model execution using OpenVINO as a backend engine for torch.compile with that of native Torch code.

 

  1. OpenVINO torch.compile did not demonstrate a significant performance advantage over native Torch execution on either Intel CPU or Intel GPU devices. For some models, the OpenVINO backend even resulted in slightly lower FPS than native Torch on Intel CPU.
  2. How can I verify that the OpenVINO engine is actually being used when running the model on Intel CPU/GPU hardware? (A verification sketch follows the code below.)
  3. Does Intel have any reference data available for OpenVINO torch.compile performance?

 

Platform: Meteor Lake Core Ultra7 155H

OS: Ubuntu 22.04

Torch: 2.1.0

OpenVINO: 2024.3

 

mobilenet_v2:

native torch: 95.70 FPS

OpenVINO CPU: 69.90 FPS

OpenVINO GPU: 59.61 FPS

resnet50:

native torch: 19.51 FPS

OpenVINO CPU: 21.28 FPS

OpenVINO GPU: 22.04 FPS

alexnet:

native torch: 105.66 FPS

OpenVINO CPU: 106.27 FPS

OpenVINO GPU: 108.99 FPS

 

Code:

 

import time

import openvino.torch  # registers the "openvino" backend for torch.compile
import torch

model = torch.hub.load("pytorch/vision:v0.10.0", "mobilenet_v2", pretrained=True)
model.eval()

model_opt = torch.compile(model, backend="openvino", options={"device": "CPU"})
input_data = torch.rand((1, 3, 224, 224))

# Time num_iterations forward passes and report throughput in FPS.
num_iterations = 1000
start_time = time.time()
for _ in range(num_iterations):
    with torch.no_grad():
        output = model(input_data)

end_time = time.time()
FPS = num_iterations / (end_time - start_time)
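
Regarding question 2, a minimal sanity check (a sketch, assuming torch 2.x; the lazy-compilation behaviour noted in the comments is a general torch.compile property, not something taken from the measurements above) is that importing openvino.torch registers an "openvino" backend with torch.compile:

import openvino.torch  # the import registers the "openvino" backend
import torch

# "openvino" should appear among the registered torch.compile backends.
print("openvino" in torch._dynamo.list_backends())

# torch.compile is lazy: nothing is compiled until the compiled model is
# actually called, and that first call is noticeably slower than later
# ones because it includes compilation. A timing loop that only calls the
# original model therefore never exercises OpenVINO at all.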

 

 

 

I would greatly appreciate any suggestions or insights. Thank you.

 

Regards,

Amanda Lee

4 Replies
Peh_Intel
Moderator

Hi Amanda Lee,


Could you try running the test again with the additional arguments model_caching and a performance hint?


model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "model_caching": True, "cache_dir": "./model_cache"})


model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "config": {"PERFORMANCE_HINT": "LATENCY"}})
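

For completeness, a minimal sketch combining both options in a single call (the option names come from the two lines above; the model, device, and input shape follow the earlier posts):

import openvino.torch  # registers the "openvino" backend
import torch

model = torch.hub.load("pytorch/vision:v0.10.0", "mobilenet_v2", pretrained=True)
model.eval()

# Model caching writes the compiled blobs to cache_dir, so later runs of
# the script skip most of the compilation cost; the latency hint tunes
# the plugin for single-stream inference.
model_opt = torch.compile(
    model,
    backend="openvino",
    options={
        "device": "CPU",
        "model_caching": True,
        "cache_dir": "./model_cache",
        "config": {"PERFORMANCE_HINT": "LATENCY"},
    },
)

# The first call triggers compilation; exclude it from any timing.
with torch.no_grad():
    _ = model_opt(torch.rand(1, 3, 224, 224))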


We do not have performance benchmarks for OpenVINO torch.compile. We only have performance benchmarks generated with benchmark_app, the open-source benchmarking tool included in the Intel® Distribution of OpenVINO™ toolkit.
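
benchmark_app consumes an OpenVINO IR model, so to compare against those published numbers the PyTorch model has to be exported first. A minimal sketch (the file name and input shape are illustrative, not from the posts above):

import openvino as ov
import torch

model = torch.hub.load("pytorch/vision:v0.10.0", "mobilenet_v2", pretrained=True)
model.eval()

# Convert the PyTorch module to OpenVINO IR and save it to disk.
ov_model = ov.convert_model(model, example_input=torch.rand(1, 3, 224, 224))
ov.save_model(ov_model, "mobilenet_v2.xml")

# Then, from a shell:
#   benchmark_app -m mobilenet_v2.xml -d CPU -hint latency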



Regards,

Peh


AmandaLee
New Contributor I

Hi Peh,

 

Thank you, I'll give it another try.

I'd like to clarify: without these additional options, the OpenVINO backend might not provide a significant performance improvement for the model. Is that accurate?

 

Regards,

Amanda Lee

Peh_Intel
Moderator

Hi Amanda Lee,


I can see a significant performance improvement using the OpenVINO backend when running the mobilenet_v2 model on CPU.



Platform: 12th Gen Intel(R) Core(TM) i5-12450H

OS: Ubuntu 22.04 (running in VM VirtualBox)

Torch: 2.1.0

OpenVINO: 2024.2.0


Here are my results:

native torch: 37.77 FPS

OpenVINO CPU: 80.89 FPS


Based on your code, can you make sure that you are using output = model(input_data) for the native torch results and output = model_opt(input_data) for the OpenVINO CPU and GPU results? As posted, the timing loop only ever calls model(input_data), so all three measurements exercise the eager model.
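
For illustration, a minimal sketch of that separation (the model name and the warm-up handling are assumptions, not from the original measurements; the warm-up keeps compilation time out of the FPS numbers):

import time

import openvino.torch
import torch

model = torch.hub.load("pytorch/vision:v0.10.0", "mobilenet_v2", pretrained=True)
model.eval()
model_opt = torch.compile(model, backend="openvino", options={"device": "CPU"})
input_data = torch.rand((1, 3, 224, 224))

def measure_fps(fn, num_iterations=1000):
    # Warm up first: the initial call to a compiled model triggers
    # compilation and must not be counted in the timing.
    with torch.no_grad():
        fn(input_data)
        start = time.time()
        for _ in range(num_iterations):
            fn(input_data)
    return num_iterations / (time.time() - start)

print("native torch:", measure_fps(model))
print("OpenVINO CPU:", measure_fps(model_opt))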



Regards,

Peh


Peh_Intel
Moderator

Hi Amanda Lee,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.



Regards,

Peh

