Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

PyTorch benchmark via “torch.compile” with OpenVINO

AmandaLee
New Contributor I

Hi Expert,

 

I've been running tests comparing the FPS of model execution through torch.compile with the OpenVINO backend against native Torch execution.

 

  1. OpenVINO torch.compile did not show a significant performance advantage over native Torch execution on either Intel CPU or Intel GPU devices. For some models, the OpenVINO backend even produced slightly lower FPS than native Torch on the Intel CPU.
  2. How can I verify that the OpenVINO engine is actually being used when running the model on Intel CPU/GPU hardware? (See the sketch below this list.)
  3. Does Intel have any reference data available for OpenVINO torch.compile performance?
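
On point 2, one quick check is to confirm the backend is registered after importing openvino.torch. A minimal sketch (torch._dynamo.list_backends() is PyTorch's generic backend query, not an OpenVINO-specific API):

import openvino.torch  # importing this module registers the "openvino" backend
import torch

# "openvino" should appear among the registered torch.compile backends
print(torch._dynamo.list_backends())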

 

Platform: Meteor Lake, Intel® Core™ Ultra 7 155H

OS: Ubuntu 22.04

Torch: 2.1.0

OpenVINO: 2024.3

 

mobilenet_v2:

native torch: 95.70 FPS

OpenVINO CPU: 69.90 FPS

OpenVINO GPU: 59.61 FPS

resnet50:

native torch: 19.51 FPS

OpenVINO CPU: 21.28 FPS

OpenVINO GPU: 22.04 FPS

alexnet:

native torch: 105.66 FPS

OpenVINO CPU: 106.27 FPS

OpenVINO GPU: 108.99 FPS

 

Code:

 

import time

import openvino.torch  # registers the "openvino" backend for torch.compile
import torch

# mobilenet_v2 is the first of the benchmarked models above
model = torch.hub.load("pytorch/vision:v0.10.0", "mobilenet_v2", pretrained=True)
model.eval()

model_opt = torch.compile(model, backend="openvino", options={"device": "CPU"})
input_data = torch.rand((1, 3, 224, 224))

num_iterations = 1000
start_time = time.time()
for _ in range(num_iterations):
    with torch.no_grad():
        output = model(input_data)

end_time = time.time()
FPS = num_iterations / (end_time - start_time)
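
One caveat about the loop above: the first call to a torch.compile-wrapped model triggers graph capture and compilation, so timing from the first iteration folds one-off compile time into the FPS figure. A minimal warm-up before starting the clock (a sketch reusing the names from the snippet above):

# Warm up: the first calls to the compiled model trigger tracing and
# compilation, which should not be counted in the timed loop.
with torch.no_grad():
    for _ in range(10):
        model_opt(input_data)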

 

 

 

I would greatly appreciate any suggestions or insights. Thank you.

 

Regards,

Amanda Lee

4 Replies
Peh_Intel
Moderator

Hi Amanda Lee,


Could you try running the test again with the additional options model_caching and a performance hint, as in the lines below?


model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "model_caching": True, "cache_dir": "./model_cache"})


model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "config": {"PERFORMANCE_HINT": "LATENCY"}})
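
If both help, the two option sets can be combined in a single call. A sketch (assuming the keys are accepted together, since they are independent options of the OpenVINO backend):

model_opt = torch.compile(
    model,
    backend="openvino",
    options={
        "device": "CPU",  # or "GPU" to target the integrated GPU
        "model_caching": True,  # cache compiled blobs across runs
        "cache_dir": "./model_cache",
        "config": {"PERFORMANCE_HINT": "LATENCY"},
    },
)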


We do not have performance benchmarks for OpenVINO torch.compile. The published benchmarks are generated with benchmark_app, an open-source tool included in the Intel® Distribution of OpenVINO™ toolkit.



Regards,

Peh


AmandaLee
New Contributor I

Hi Peh,

 

Thank you, I'll give it another try.

To clarify: without these extra options, the OpenVINO backend might not provide a significant performance improvement for the model. Is that accurate?

 

Regards,

Amanda Lee

Peh_Intel
Moderator (Accepted Solution)

Hi Amanda Lee,


I can see a significant performance improvement using the OpenVINO backend when running the mobilenet_v2 model on CPU.



Platform: 12th Gen Intel(R) Core(TM) i5-12450H

OS: Ubuntu 22.04 (running in VM VirtualBox)

Torch: 2.1.0

OpenVINO: 2024.2.0


Here are my results:

native torch: 37.77 FPS

OpenVINO CPU: 80.89 FPS


Based on your code, can you confirm that you are using output = model(input_data) for the native torch results and output = model_opt(input_data) for the OpenVINO CPU and GPU results?
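
In other words, the two measurements should call different objects. A minimal helper (a sketch; measure_fps is a hypothetical name, reusing model, model_opt and input_data from the original snippet):

import time

import torch

def measure_fps(fn, input_data, n=1000):
    # Times n forward passes of whichever callable is passed in, so the
    # native and compiled paths are measured the same way. Warm up
    # model_opt beforehand so compile time is excluded.
    with torch.no_grad():
        start = time.time()
        for _ in range(n):
            fn(input_data)
    return n / (time.time() - start)

native_fps = measure_fps(model, input_data)        # eager PyTorch
openvino_fps = measure_fps(model_opt, input_data)  # OpenVINO backend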



Regards,

Peh


Peh_Intel
Moderator

Hi Amanda Lee,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.



Regards,

Peh

