Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

PyTorch benchmark via “torch.compile” with OpenVINO

AmandaLee
New Contributor I

Hi Expert,

 

I've been running tests comparing the FPS of models executed via torch.compile with the OpenVINO backend against the same models run as native Torch code.

 

  1. OpenVINO torch.compile did not show a significant performance advantage over native Torch execution on either Intel CPU or Intel GPU devices. For some models, the OpenVINO backend even produced slightly lower FPS than native Torch code on the Intel CPU.
  2. How can I verify that the OpenVINO engine is actually being used when running the model on Intel CPU/GPU hardware? (See the sketch after this list.)
  3. Does Intel have any reference data available for OpenVINO torch.compile performance?
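
For question 2, the only quick check I know of (a sketch, assuming the Torch 2.x torch._dynamo API) is to confirm that importing openvino.torch registers the "openvino" backend; note that registration alone does not prove the compiled graph actually ran on OpenVINO:

import openvino.torch  # importing this should register the "openvino" backend
import torch

# "openvino" is expected to appear among the registered torch.compile backends
print(torch._dynamo.list_backends())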

 

Platform: Meteor Lake Core Ultra7 155H

OS: Ubuntu 22.04

Torch: 2.1.0

OpenVINO: 2024.3

 

mobilenet_v2: native torch 95.70 FPS | OpenVINO CPU 69.90 FPS | OpenVINO GPU 59.61 FPS

resnet50: native torch 19.51 FPS | OpenVINO CPU 21.28 FPS | OpenVINO GPU 22.04 FPS

alexnet: native torch 105.66 FPS | OpenVINO CPU 106.27 FPS | OpenVINO GPU 108.99 FPS

 

Code:

 

import time

import openvino.torch  # registers the "openvino" backend for torch.compile
import torch

# Load one of the tested torchvision models (mobilenet_v2 shown here)
model = torch.hub.load("pytorch/vision:v0.10.0", "mobilenet_v2", pretrained=True)
model.eval()

model_opt = torch.compile(model, backend="openvino", options={"device": "CPU"})
input_data = torch.rand((1, 3, 224, 224))

# Warm-up pass so the one-time compilation cost is not included in the timing
with torch.no_grad():
    model_opt(input_data)

num_iterations = 1000
start_time = time.time()
for _ in range(num_iterations):
    with torch.no_grad():
        output = model_opt(input_data)  # time the compiled model
end_time = time.time()

FPS = num_iterations / (end_time - start_time)

I would greatly appreciate any suggestions or insights. Thank you.

 

Regards,

Amanda Lee

Peh_Intel
Moderator

Hi Amanda Lee,


Could you try running the test again with the additional options model_caching and a performance hint?


model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "model_caching": True, "cache_dir": "./model_cache"})


model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "config": {"PERFORMANCE_HINT": "LATENCY"}})
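

If it helps, both options can presumably be passed together in a single options dictionary (a sketch, assuming the backend accepts these keys in combination; the first compiled run still pays the compilation cost, and caching mainly speeds up subsequent runs):

model_opt = torch.compile(
    model,
    backend="openvino",
    options={
        "device": "CPU",
        "model_caching": True,          # cache the compiled model on disk
        "cache_dir": "./model_cache",   # where the cached blobs are stored
        "config": {"PERFORMANCE_HINT": "LATENCY"},  # OpenVINO runtime hint
    },
)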


We do not have performance benchmarks for OpenVINO torch.compile. The published benchmarks are generated with benchmark_app, the open-source benchmarking tool included in the Intel® Distribution of OpenVINO™ toolkit.
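

For a comparison outside of torch.compile, the model can be converted to OpenVINO IR and measured with benchmark_app directly (a minimal sketch; ov.convert_model with example_input and the benchmark_app flags shown are based on recent OpenVINO releases and should be verified against your installed version):

import openvino as ov
import torch
import torchvision

# Convert the eager PyTorch model to OpenVINO IR
model = torchvision.models.mobilenet_v2(pretrained=True).eval()
ov_model = ov.convert_model(model, example_input=torch.rand(1, 3, 224, 224))
ov.save_model(ov_model, "mobilenet_v2.xml")

# Then benchmark the IR from a shell:
#   benchmark_app -m mobilenet_v2.xml -d CPU -hint latency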



Regards,

Peh

