Hello,
Is there any performance (or other) difference between
...
inferRequest.Infer();
...
vs
...
inferRequest.StartAsync();
if (InferenceEngine::OK != inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY))
...
In a video course, the presenter says the asynchronous version is faster, but does not explain why.
Hi Sergey,
Thanks for reaching out to us.
The InferenceEngine::InferRequest class reference page is available at:
https://docs.openvinotoolkit.org/2021.1/classInferenceEngine_1_1InferRequest.html
InferenceEngine::InferRequest::Infer infers specified input(s) in synchronous mode.
InferenceEngine::InferRequest::StartAsync starts inference of specified input(s) in asynchronous mode.
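To illustrate the difference, here is a minimal sketch using the same inferRequest as in your snippet (the "other work" placeholder is illustrative). Infer() blocks the calling thread until the result is ready, while StartAsync() returns immediately, so the application can do other useful work (for example, prepare the next video frame) before collecting the result with Wait(). That overlap of inference and application work is the usual reason asynchronous mode gives higher throughput:
// Synchronous mode: the calling thread blocks until inference completes.
inferRequest.Infer();
// the result is available here
// Asynchronous mode: StartAsync() returns immediately.
inferRequest.StartAsync();
// ... do other useful work here, e.g. prepare the next input ...
if (InferenceEngine::OK != inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY)) {
    // handle the error status returned by Wait()
}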
For more information, please refer to point 7 in the following section:
To estimate deep learning inference performance on supported devices, I would suggest you use the Benchmark C++ Tool. Performance can be measured in two inference modes: synchronous (latency-oriented) and asynchronous (throughput-oriented). You can use the -api command-line parameter to select the inference mode.
https://docs.openvinotoolkit.org/2021.1/openvino_inference_engine_samples_benchmark_app_README.html
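For example, you can compare the two modes like this (the model path and the CPU device here are placeholders, not from this thread):
./benchmark_app -m model.xml -d CPU -api sync
./benchmark_app -m model.xml -d CPU -api async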
Regards,
Munesh
Hi Sergey,
This thread will no longer be monitored since we have provided the requested information and suggestions. If you need any additional information from Intel, please submit a new question.
Regards,
Munesh