When doing inference with the GPU plugin, I tested my program on different PCs and found that the time cost differs a lot. I want to find the major factors affecting the inference time. Can anyone give me some advice?
Hello Liang Heng,
What are the GPU models in the systems you are testing?
I think you will find that in most cases GPU inference will be faster on GPUs with more EUs and/or a higher GPU clock, but there are many other factors too.
For details on GPU EU counts and clocks, please refer to https://ark.intel.com/content/www/us/en/ark.html
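If it helps to identify the hardware on each machine, a short sketch like the one below should print the GPU name that OpenVINO sees, so you can look both parts up on ark.intel.com. It assumes the Python Inference Engine API is installed; the FULL_DEVICE_NAME metric may not be available on very old releases.

    # Print the devices OpenVINO can see and the GPU's full name,
    # so the systems under test can be compared on ark.intel.com.
    from openvino.inference_engine import IECore

    ie = IECore()
    print("Available devices:", ie.available_devices)
    if "GPU" in ie.available_devices:
        print("GPU:", ie.get_metric("GPU", "FULL_DEVICE_NAME"))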
Cheers,
Nikos
Dear Liang Heng,
In your experiments I assume that you're using the same image(s) each time. What Nikos said is correct, but performance also depends on the model you are using and the image sizes you are passing in, because the GPU plugin optimizes models for certain ideal kernel sizes. It's best to feed in an image size that is optimal for the model.
You can also use the benchmark_app to perform experiments.
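For example, a GPU run is typically something like python3 benchmark_app.py -m model.xml -d GPU -niter 100 (the model path and iteration count are placeholders, and the script location depends on your OpenVINO install). If you would rather time your own code, a minimal sketch with the classic Inference Engine Python API could look like this; it uses a random dummy input and the same placeholder model files:

    # Time repeated inferences on the GPU plugin with a dummy input.
    # "model.xml"/"model.bin" are placeholders for your IR files; the
    # first infer() call is a warm-up so model compilation is excluded.
    import time
    import numpy as np
    from openvino.inference_engine import IECore, IENetwork

    ie = IECore()
    net = IENetwork(model="model.xml", weights="model.bin")
    input_blob = next(iter(net.inputs))
    shape = net.inputs[input_blob].shape
    exec_net = ie.load_network(network=net, device_name="GPU")

    data = np.random.rand(*shape).astype(np.float32)
    exec_net.infer({input_blob: data})  # warm-up

    n = 100
    start = time.time()
    for _ in range(n):
        exec_net.infer({input_blob: data})
    print("average latency: %.2f ms" % ((time.time() - start) / n * 1000))

Running the same script on each PC should show whether the gap comes from the GPU itself or from other parts of your application.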
Hope it helps,
Thanks,
Shubha
