Hi Expert,
I've been conducting some tests to compare the FPS of model execution using OpenVINO as a backend engine for torch.compile with that of native Torch code.
- OpenVINO torch.compile did not show a significant performance advantage over native Torch execution on either Intel CPU or Intel GPU devices. For some models, the OpenVINO backend even resulted in slightly lower FPS than native Torch on Intel CPU.
- How can I verify that the OpenVINO engine is actually being used when running the model on Intel CPU/GPU hardware?
- Does Intel have any reference data available for OpenVINO torch.compile performance?
Platform: Meteor Lake Core Ultra7 155H
OS: Ubuntu 22.04
Torch: 2.1.0
OpenVINO: 2024.3
mobilenet_v2:
native torch: 95.70 FPS
OpenVINO CPU: 69.90 FPS
OpenVINO GPU: 59.61 FPS
resnet50:
native torch: 19.51 FPS
OpenVINO CPU: 21.28 FPS
OpenVINO GPU: 22.04 FPS
alexnet:
native torch: 105.66 FPS
OpenVINO CPU: 106.27 FPS
OpenVINO GPU: 108.99 FPS
Code:
import time

import openvino.torch
import torch

# Model under test (mobilenet_v2 shown; resnet50 and alexnet were tested the same way)
model = torch.hub.load("pytorch/vision:v0.10.0", "mobilenet_v2", pretrained=True)
model.eval()
model_opt = torch.compile(model, backend="openvino", options={"device": "CPU"})

input_data = torch.rand((1, 3, 224, 224))
num_iterations = 1000

start_time = time.time()
for _ in range(num_iterations):
    with torch.no_grad():
        output = model(input_data)
end_time = time.time()
FPS = num_iterations / (end_time - start_time)
I would greatly appreciate any suggestions or insights. Thank you.
Regards,
Amanda Lee
Hi Amanda Lee,
I can see a significant performance improvement using the OpenVINO backend when running the mobilenet_v2 model on CPU.
Platform: 12th Gen Intel(R) Core(TM) i5-12450H
OS: Ubuntu 22.04 (running in VM VirtualBox)
Torch: 2.1.0
OpenVINO: 2024.2.0
Here are my results:
native torch: 37.77 FPS
OpenVINO CPU: 80.89 FPS
Based on your code, can you make sure you are using output = model(input_data) for the native Torch results and output = model_opt(input_data) for the OpenVINO CPU and GPU results? Your posted loop times model(input_data) even after compiling, so the compiled model may never have been measured.
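It also helps to warm up before timing, since torch.compile does its compilation work on the first call. Below is a minimal sketch of such a harness; the fake_model stand-in is hypothetical so the sketch runs on its own, and in your test you would pass lambda: model(input_data) and lambda: model_opt(input_data) instead:

```python
import time

def measure_fps(fn, num_iterations=1000, warmup=10):
    # Warm-up iterations exclude one-time costs such as torch.compile compilation
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(num_iterations):
        fn()
    return num_iterations / (time.perf_counter() - start)

# Hypothetical stand-in for model(input_data) so this sketch is self-contained;
# replace it with lambda: model(input_data) or lambda: model_opt(input_data).
def fake_model():
    return sum(range(1000))

print(f"{measure_fps(fake_model, num_iterations=100):.2f} FPS")
```

Timing both paths through the same function guarantees the only difference measured is the model call itself.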
Regards,
Peh
Hi Amanda Lee,
Could you try running the test again using the additional options model_caching and a performance hint?
model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "model_caching": True, "cache_dir": "./model_cache"})
model_opt = torch.compile(model, backend="openvino", options={"device": "CPU", "config": {"PERFORMANCE_HINT": "LATENCY"}})
We do not have performance benchmarks for OpenVINO torch.compile. The only performance benchmarks we publish are generated with benchmark_app, the open-source benchmarking tool included in the Intel® Distribution of OpenVINO™ toolkit.
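For reference, benchmark_app runs against a converted OpenVINO IR model; a typical invocation (a sketch, where model.xml is a placeholder for your own IR file) looks like:

```shell
# Benchmark an OpenVINO IR model on CPU with a latency hint for 10 seconds
benchmark_app -m model.xml -d CPU -hint latency -t 10
```

This measures the OpenVINO runtime directly, without the torch.compile layer, so it is a useful upper-bound comparison point.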
Regards,
Peh
Hi Peh,
Thank you, I'll give it another try.
I'd like to confirm my understanding: without these additional options, the OpenVINO backend might not provide a significant performance improvement for a model. Is that accurate?
Regards,
Amanda Lee
Hi Amanda Lee,
This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,
Peh