Hi All,
Why? Why? Why? When I use OpenVINO to run the ResNet_50_cloth_iter_30000 model, the GPU is slower than the CPU.
1 My test env:
Win 10, Intel i7-7700 3.6 GHz, 8 GB memory; OpenVINO ver 1.4
Intel i7-7700: https://www.intel.cn/content/www/cn/zh/products/processors/core/i7-processors/i7-7700.html
Intel i7-7700 GPU: Intel® HD Graphics 630
2 Test result:
CPU: 22.5431 ms; GPU: 32.396 ms
[more info] My Code:
// --------------------------- 1. Load Plugin for inference engine -------------------------------------
//std::string pluginName {"GPU"};
std::string pluginName {"CPU"};
if (MY_DEBUG) std::cout << "[ plugin ] Loading plugin=" << pluginName << std::endl;
InferencePlugin plugin = PluginDispatcher({"../../../lib/intel64", ""}).getPluginByDevice(pluginName);
// -----------------------------------------------------------------------------------------------------
Try the FP16 version of your model for the GPU (this is a Model Optimizer conversion option).
Also make sure to run a sufficient number of iterations so the timings are stable.
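FP16 is selected when the IR is generated, not at inference time. A sketch of the Model Optimizer invocation; the model file name and extension here are assumptions (the `_iter_30000` naming suggests a Caffe snapshot), so adjust paths to your own setup:

```shell
# Hypothetical file names -- adapt to your install and model.
python mo.py \
    --input_model ResNet_50_cloth_iter_30000.caffemodel \
    --data_type FP16 \
    --output_dir ./ir_fp16
```

The resulting FP16 IR is then loaded exactly as before, with `pluginName` set to `"GPU"`.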
+1 for FP16! We are seeing 25% to 30% faster execution with FP16.
Also, the HD 630 is a GT2 GPU without that many EUs. If you are on an Atom or a slower dual-core CPU paired with a GT4 GPU, the situation is much better for the GPU.
In any case, the GPU path is much more power efficient and gives better load balancing than running 100% MKL-DNN code on the CPU.
In general, even if the GPU and CPU paths are similar in speed, the GPU should be preferred as it is the greener option :-)
Cheers,
Nikos