Hi Expert,
I've been running some tests comparing the FPS of model execution with OpenVINO as the backend engine for torch.compile against native PyTorch execution.
- OpenVINO torch.compile did not show a significant performance advantage over native Torch execution on either Intel CPU or Intel GPU devices. For some models, the OpenVINO backend even produced slightly lower FPS than native Torch on Intel CPU.
- How can I verify that the OpenVINO engine is actually being used when running the model on Intel CPU/GPU hardware?
- Does Intel have any reference data available for OpenVINO torch.compile performance?
Platform: Meteor Lake Core Ultra7 155H
OS: Ubuntu 22.04
Torch: 2.1.0
OpenVINO: 2024.3
| Model | Native Torch (FPS) | OpenVINO CPU (FPS) | OpenVINO GPU (FPS) |
|---|---|---|---|
| mobilenet_v2 | 95.70 | 69.90 | 59.61 |
| resnet50 | 19.51 | 21.28 | 22.04 |
| alexnet | 105.66 | 106.27 | 108.99 |
Code:

```python
import time

import openvino.torch  # registers the "openvino" backend for torch.compile
import torch

model = torch.hub.load("pytorch/vision:v0.10.0", "yolov5s", pretrained=True)
model.eval()
model_opt = torch.compile(model, backend="openvino", options={"device": "CPU"})

input_data = torch.rand((1, 3, 224, 224))
num_iterations = 1000

start_time = time.time()
for _ in range(num_iterations):
    with torch.no_grad():
        # Benchmark the compiled model, not the original eager-mode model
        output = model_opt(input_data)
end_time = time.time()

FPS = num_iterations / (end_time - start_time)
```
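One caveat with the loop above: torch.compile compiles lazily on the first call, so the compilation time is counted in the measurement unless warm-up iterations are excluded. A minimal timing sketch (the helper name `measure_fps` and the warm-up count are illustrative, not from any official API):

```python
import time


def measure_fps(fn, inputs, num_iterations=1000, warmup=10):
    """Time a callable, excluding initial warm-up runs from the measurement."""
    # Warm-up: torch.compile triggers graph capture and compilation on the
    # first call(s), which would otherwise inflate the measured time.
    for _ in range(warmup):
        fn(inputs)

    start = time.time()
    for _ in range(num_iterations):
        fn(inputs)
    elapsed = time.time() - start

    return num_iterations / elapsed
```

The compiled model would be passed as the callable, e.g. `measure_fps(lambda x: model_opt(x), input_data)` inside a `torch.no_grad()` context.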
I would greatly appreciate any suggestions or insights. Thank you.
Regards,
Amanda Lee
Hi Amanda Lee,
Could you try running the test again with the additional options, model_caching and a performance hint?
```python
model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "model_caching": True, "cache_dir": "./model_cache"})
model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "config": {"PERFORMANCE_HINT": "LATENCY"}})
```
We do not have performance benchmarks for OpenVINO torch.compile. The only performance benchmarks we publish are generated with benchmark_app, an open-source tool included in the Intel® Distribution of OpenVINO™ toolkit.
Regards,
Peh