Hello,
Is there any performance (or other) difference between
....
inferRequest.Infer();
...
vs
...
inferRequest.StartAsync();
if (InferenceEngine::OK != inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY))
....
In a video course, the presenter says the asynchronous version is faster, but without explanation.
Hi Sergey,
Thanks for reaching out to us.
The InferenceEngine::InferRequest class reference page is available at
https://docs.openvinotoolkit.org/2021.1/classInferenceEngine_1_1InferRequest.html
InferenceEngine::InferRequest::Infer runs inference on the specified input(s) in synchronous mode: the call blocks until the result is ready.
InferenceEngine::InferRequest::StartAsync starts inference of the specified input(s) in asynchronous mode: the call returns immediately, and the result is retrieved later with Wait.
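
This is also why the asynchronous form can be faster in practice: because StartAsync returns immediately, the calling thread can overlap inference with other work (for example, decoding or preprocessing the next frame), or keep several infer requests in flight at once, which improves throughput. A single synchronous Infer call offers no such overlap. Here is a minimal sketch of both modes; the model path "model.xml" and the "CPU" device are placeholders, not anything from your setup:

#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    Core ie;
    // Read and load a network (model.xml is a placeholder IR path)
    CNNNetwork network = ie.ReadNetwork("model.xml");
    ExecutableNetwork execNet = ie.LoadNetwork(network, "CPU");
    InferRequest request = execNet.CreateInferRequest();

    // Synchronous mode: Infer() blocks until the result is ready
    request.Infer();

    // Asynchronous mode: StartAsync() returns immediately,
    // so this thread is free to do other work in the meantime
    request.StartAsync();
    // ... e.g. decode/preprocess the next frame here ...
    if (OK != request.Wait(IInferRequest::WaitMode::RESULT_READY)) {
        // handle inference error or timeout
    }
    return 0;
}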
For more information, please refer to point 7 in the following section:
To estimate deep learning inference performance on supported devices, I would suggest you use the Benchmark C++ Tool. Performance can be measured for two inference modes: synchronous (latency-oriented) and asynchronous (throughput-oriented). You can use the -api command-line parameter to select the inference mode.
https://docs.openvinotoolkit.org/2021.1/openvino_inference_engine_samples_benchmark_app_README.html
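
For example, to compare the two modes on the same model (model.xml is again just a placeholder path):

./benchmark_app -m model.xml -api sync
./benchmark_app -m model.xml -api async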
Regards,
Munesh
